Case 1 (K² = 0): |−K| ≠ ∅. Take a hyperplane section H of X. Then there is an n ≥ 0 s.t. |H + nK| ≠ ∅ but |H + (n + 1)K| = ∅. Since −K ∼ an effective nonzero divisor, H·K < 0, so H·(H + nK) is eventually negative and H + nK is eventually not effective. Let D ∈ |H + nK|: then |D + K| = ∅ and K·D = K·(H + nK) = K·H < 0 since −K is effective and H very ample.
Case 2 (K² < 0): it is enough to find an effective divisor E on X s.t. K·E < 0. Then some component C of E will have K·C < 0. The genus formula gives −2 ≤ 2g − 2 = C·(C + K) ⟹ C² ≥ −1. C² = −1 is impossible since X is minimal, so C² ≥ 0. Now (C + nK)·C is negative for n ≫ 0, so C + nK is not effective for n ≫ 0 by the useful lemma. So ∃n s.t. |C + nK| ≠ ∅ but |C + (n + 1)K| = ∅. Choosing D ∈ |C + nK| gives the desired divisor.
We now find the claimed E. Again, let H be a hyperplane section: if K·H < 0, we can take E = H; if K·H = 0, we can take K + nH for n ≫ 0; so assume K·H > 0. Let γ = −(H·K)/K² > 0, so that (H + γK)·K = 0. Also,

(3) (H + γK)² = H² + 2γ(H·K) + γ²K² = H² + (K·H)²/(−K²) > 0

So take β rational and slightly larger than γ to get

(4) (H + βK)·K < (H + γK)·K = 0

(since K² < 0) and (H + βK)² > 0. Therefore, (H + βK)·H > 0. Write β = r/s. Then

(5) (rH + sK)² > 0, (rH + sK)·K < 0, (rH + sK)·H > 0

by the equivalent facts for β. Let D = rH + sK. For m ≫ 0, by Riemann-Roch we get h⁰(mD) + h⁰(K − mD) ≥ ½ mD·(mD − K) + 1 → ∞. Moreover, K − mD is not effective for m ≫ 0 since (K − mD)·H = (K·H) − m(D·H) is eventually negative. Thus, mD is effective for large m, and we can take E ∈ |mD|.
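For reference, a verification step not spelled out in the notes: substituting γ = −(H·K)/K² into the expansion of (H + γK)² gives the claimed positivity.

```latex
(H+\gamma K)^2 = H^2 + 2\gamma (H\cdot K) + \gamma^2 K^2
             = H^2 - \frac{2(H\cdot K)^2}{K^2} + \frac{(H\cdot K)^2}{K^2}
             = H^2 + \frac{(K\cdot H)^2}{-K^2} > 0,
```

which is positive because K² < 0 makes the last fraction positive and H² > 0 for a hyperplane section.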
Case 3 (K² > 0): Assume that there is no such D as in reduction 2, i.e. K·D ≥ 0 for every effective divisor D s.t. |K + D| = ∅. We will obtain a contradiction.
ALGEBRAIC SURFACES, LECTURE 9

Lemma 1. If X is a minimal surface with p2 = q = 0, K² > 0, and K·D ≥ 0 for every effective divisor D on X s.t. |K + D| = ∅, then:
(1) Pic(X) is generated by ω_X = O_X(K), and the anticanonical bundle O_X(−K) is ample. In particular, X doesn't have any nonsingular rational curves.
(2) Every divisor of |−K| is an integral curve of arithmetic genus 1.
(3) K² ≤ 5 and b2 ≥ 5. (Here, b2 = dim H²_ét(X, Q_ℓ) in general.)
Proof. First, let us see that every element D of |−K| is an irreducible curve. If not, let C be a component of D s.t. K·C < 0 (which we can find, since K·D = −K² < 0). If D = C + C′, then |K + C| = |−D + C| = |−C′| = ∅ since C′ is effective. Also, C·K < 0, contradicting the hypothesis. So D is irreducible, and similarly D is not a multiple. Furthermore, pa(D) = ½ D·(D + K) + 1 = 1, showing (2).

Next, we claim that the only effective divisor D s.t. |D + K| = ∅ is the zero divisor. Assume not, i.e. ∃D > 0 s.t. |K + D| = ∅. Let x ∈ D: then since h⁰(−K) ≥ 1 + K² ≥ 2, there is a C ∈ |−K| passing through x. C is an integral curve, and cannot be a component of D, since then

(6) |K + D| ⊃ |K + C| = |0| ≠ ∅

So C·D > 0 since they meet at least in x. Then K·D = −C·D < 0, contradicting the hypothesis.
As an aside, we claim that pn = 0 for all n ≥ 1: we know that p2 = 0 ⟹ p1 = 0; if 3K were effective then 2K would be too (since −K is effective), which contradicts p2 = 0, so p3 = 0, and by induction pn = 0 for all n ≥ 1.
We claim that adjunction terminates: if D is any divisor on X, then there is an integer n_D s.t. |D + nK| = ∅ for n ≥ n_D. To see this, note that (D + nK)·(−K) will eventually become negative. −K is represented by an irreducible curve of positive self-intersection, so by the useful lemma D + nK is not effective for n ≫ 0. Now, let Δ be an arbitrary effective divisor. Then ∃n ≥ 0 s.t. |Δ + nK| ≠ ∅ but |Δ + (n + 1)K| = ∅. Take D ∈ |Δ + nK| effective. |D + K| = ∅ ⟹ D = 0 from above. Since any divisor is a difference of effective divisors, Pic(X) is generated by K. If H is a hyperplane section on X, then H ∼ −nK with n > 0, implying that −K is ample. Let C be any integral curve on X: then C ∼ −mK for some m ≥ 1. pa(C) = ½(−mK)·(−mK + K) + 1 = ½ m(m − 1)K² + 1 ≥ 1, so there is no smooth rational curve on X, completing (1).
We are left to prove (3). Assume that K² ≥ 6. Then h⁰(−K) ≥ 1 + K² ≥ 7. Fix points x and y on X: we claim that ∃C ∈ |−K| with x and y singular points of C. This would be a contradiction, since pa(C) = 1 ⟹ pa(C̃) < 0, which is absurd. So K² ≤ 5. To see the existence of this C, let

(7) I_x = Ker(O_X → O_{X,x}/m_x²), I_y = Ker(O_X → O_{X,y}/m_y²)

Then we get, by the Chinese Remainder theorem,

(8) 0 → O_X(−K) ⊗ I_x ⊗ I_y → O_X(−K) → k⁶ → 0

since O_{X,x}/m_x² and O_{X,y}/m_y² have dimension 3 over k. Taking the long exact sequence, we find that h⁰(O_X(−K) ⊗ I_x ⊗ I_y) ≠ 0, and get a nonzero section of that sheaf.

LECTURES: ABHINAV KUMAR

Its divisor of zeros passes through x and y with multiplicity at least 2, giving us the claimed curve.
Finally, by Noether's formula, 1 = χ(O_X) = (1/12)(K² + e(X)), where e(X) = 2 − 2b1 + b2. b1 = 2q by Hodge theory over C (in general, b1 ≤ 2q, but q = 0 ⟹ b1 = 0 as well), so 10 = K² + b2 ⟹ b2 ≥ 5.
We now show that no surface has these properties. In characteristic 0, the Lefschetz principle allows us to reduce to k = C. Taking the cohomology of the exponential exact sequence 0 → Z → O_X^an → (O_X^an)^* → 1 gives

(9) H¹(O_X^an) → H¹((O_X^an)^*) → H²(X, Z) → H²(O_X^an) → ···

By Serre's GAGA, Hⁱ(X, F) ≅ Hⁱ(X^an, F^an) for a coherent O_X-module F. Since q = pg = 0, we have h¹(O_X^an) = h²(O_X^an) = 0, and

(10) H¹((O_X^an)^*) ≅ H¹(O_X^*) = Pic X ≅ H²(X, Z)

This implies that b2 = rank H²(X, Z) = rank Pic X = 1, contradicting b2 ≥ 5. For positive characteristic, we will sketch a proof: the first proof was given by Zariski, and the second, using étale cohomology, by Artin and by Kurke. Our proof will be by reduction to characteristic 0.
Bluespec Tutorial: Rule Scheduling and Synthesis
Michael Pellauer
Computer Science & Artificial Intelligence Lab
Massachusetts Institute of Technology
Based on material prepared by Bluespec Inc,
January 2005
March 4, 2005
Improving performance via
scheduling
Latency and bandwidth can be improved by
performing more operations in each clock cycle
- That is, by firing more rules per cycle
Bluespec schedules all applicable rules in a cycle
to execute, except when there are resource
conflicts
Therefore: Improving performance is often
about resolving conflicts found by the scheduler
Viewing the schedule
The command-line flag -show-schedule can
be used to dump the schedule
Three groups of information:
- method scheduling information
- rule scheduling information
- the static execution order of rules and methods
Method scheduling info
For each method, there is an entry like this, giving the name of the method, the expression for its ready signal (1 for always ready), and its conflict relationships with other methods:

Method: imem_get
Ready signal: 1
Conflict-free: dmem_get, dmem_put, start, done
Sequenced before: imem_put
Conflicts: imem_get
Types of conflicts
Conflict-free
- Any methods which can execute in the same clock cycle as the current method, in any execution order
Sequenced before
- Any methods which can execute in the same clock cycle, but only if they sequence before the current method in the execution order
Sequenced after
- Any methods which can execute in the same clock cycle, but only if they sequence after the current method
Conflicts
- Any methods which cannot execute in the same clock cycle as this method
Rule scheduling info
For each rule, there is an entry like this, giving the name of the rule, the expression for the rule's condition, and the more urgent rules which can block the execution of this rule (more on urgency later):

Rule: fetch
Predicate: the_bf.i_notFull_ && the_started.get
Blocking rules: imem_put, start
Static execution order
When multiple rules execute in a single
clock cycle, they must appear to
execute in sequence
This execution sequence is fixed at
compile-time. All rule conditions are
evaluated in this order during every
clock cycle
The final part of the schedule output is
this order
Urgency
The compiler performs aggressive
analysis of rule boolean conditions and
is therefore aware of mutual exclusion
(i.e., when it is impossible for two rules
to be enabled simultaneously)
- Thus, typically the compiler does not often need to choose between competing rules
- The compiler produces informational messages about scheduling choices only where necessary
Viewing conflict information
The -show-schedule flag will inform you that a rule is blocked by a conflicting rule
- The output won't show you why the rules conflict
The output will show you that one rule was sequenced before another rule
- The output won't tell you whether the other order was not possible due to a conflict
For conflict information, use the -show-rule-rel flag
- See User Guide section 8.2.2
Scheduling conflicting rules
When two rules conflict on a shared
resource, they cannot both execute in
the same clock
The compiler produces logic that
ensures that, when both rules are
enabled, only one will fire
Which one?
- The compiler chooses (and informs you, during compilation)
- The "descending_urgency" attribute allows the designer to control the choice
Demo Example 2:
Concurrent Updates
Process 0 increments register x;
Process 1 transfers a unit from register x to register y;
Process 2 decrements register y
[Diagram: processes 0, 1, and 2 act on registers x and y with +1/−1 updates]
rule proc0 (cond0);
x <= x + 1;
endrule
rule proc1 (cond1);
y <= y + 1;
x <= x - 1;
endrule
rule proc2 (cond2);
y <= y – 1;
endrule
(* descending_urgency = "proc2, proc1, proc0" *)
show what happens under different urgency annotations
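The three rules conflict pairwise on the registers they write. As an illustration only (Python, not Bluespec, with a simplified conflict model: at most one rule may write a given register per cycle), one scheduling cycle under a descending-urgency order can be sketched:

```python
# Hypothetical model: a rule whose write set overlaps the write set of a
# more urgent rule that already fired is blocked for this cycle.
rules = {
    "proc0": {"x": +1},           # x <= x + 1
    "proc1": {"x": -1, "y": +1},  # transfer a unit from x to y
    "proc2": {"y": -1},           # y <= y - 1
}

def schedule(urgency, enabled, state):
    claimed, fired = set(), []
    for name in urgency:                    # most urgent first
        if name in enabled and not (claimed & set(rules[name])):
            claimed |= set(rules[name])
            fired.append(name)
    for name in fired:                      # apply the winners' updates
        for reg, delta in rules[name].items():
            state[reg] += delta
    return fired, state

fired, state = schedule(["proc2", "proc1", "proc0"],
                        {"proc0", "proc1", "proc2"}, {"x": 0, "y": 0})
assert fired == ["proc2", "proc0"]  # proc1 loses: it conflicts with proc2 on y
assert state == {"x": 1, "y": -1}
```

Changing the urgency order changes which rule is blocked, which is exactly what the demo explores.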
Example2.bsv Demo
Compile (bsc Example2.bsv)
Generate Verilog (bsc -verilog -g mkExample2 Example2.bsv)
Run in vcs (see lab3 handout)
Examine WILL_FIRE
-keep-fires (examine CAN_FIRE)
-show-schedule
-show-rule-rel (see manual)
Changing the predicates to True?
Conditionals and rule-splitting
In Rule Semantics this rule:
rule r1 (p1);
if (q1) f.enq(x);
else g.enq(y);
endrule
Is equivalent to the following two rules:
rule r1a (p1 && q1);
f.enq(x);
endrule
rule r1b (p1 && ! q1);
g.enq(y);
endrule
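A toy Python rendering of this transformation (illustrative only; p1, q1 and the enq actions are stand-ins for the Bluespec expressions):

```python
# split a guarded conditional rule into two rules, hoisting the branch
# test into the predicates, as r1 -> r1a, r1b above
def split(p1, q1, then_act, else_act):
    return [(lambda s: p1(s) and q1(s), then_act),
            (lambda s: p1(s) and not q1(s), else_act)]

log = []
rules = split(lambda s: s["p1"], lambda s: s["q1"],
              lambda s: log.append("f.enq(x)"),
              lambda s: log.append("g.enq(y)"))

state = {"p1": True, "q1": False}
for pred, act in rules:
    if pred(state):
        act(state)
assert log == ["g.enq(y)"]  # exactly one of the split rules fires
```

The two predicates are mutually exclusive by construction, so splitting never causes both branches to fire in one cycle.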
Demo rule splitting:
Example 3
(* descending_urgency = "r1, r2" *)
// Moving packets from input FIFO i1
rule r1;
Tin x = i1.first();
if (dest(x)== 1) o1.enq(x);
else o2.enq(x);
i1.deq();
if (interesting(x)) c <= c + 1;
endrule
// Moving packets from input FIFO i2
rule r2;
Tin x = i2.first();
if (dest(x)== 1) o1.enq(x);
else o2.enq(x);
i2.deq();
if (interesting(x)) c <= c + 1;
endrule
[Diagram: two rules move packets from input FIFOs i1 and i2, each determining an output queue (o1 or o2) and incrementing a counter c for certain packets]
Example3.bsv Demo
Compiling
Examining FIFO signals, enables
Examining conservative conditions
- What are the predicates for R1, R2?
-aggressive-conditions
- What are the predicates now?
-expand-if
- Why can certain generated rules never fire?
Summary of
performance tuning
If the schedule of rules is not as you expected or desire,
we have seen several ways to adjust the schedule for
improved performance:
- Remove rule conflicts by splitting rules
- Change rule urgency
Sometimes, an urgency warning or a conflict can be due to a mistake or oversight by the designer
- A rule may accidentally include an action which shouldn't be there
- A rule may accidentally write to the wrong state element
- A rule predicate might be missing an expression which would make the rule mutually exclusive with a conflicting rule
Rule attributes
We have already seen the
descending_urgency attribute on rules
There are two other useful attributes which
can be applied to rules:
- fire_when_enabled
- no_implicit_conditions
These attributes are assertions about the rule
which bsc verifies
Does not change generated RTL
fire_when_enabled
Asserts that the rule will always execute when
its condition is applicable
- i.e., there are no (more urgent) conflicting rules
Can be used to guarantee that a rule will
handle some condition, by guaranteeing that
the rule fires when the condition arises
Examples:
- To handle an unbuffered input on the interface
  - particularly in a time-based or synchronous module and particularly when the interface is "always_enabled"
- To handle transient situations, e.g., interrupts
no_implicit_conditions
Asserts that rule actions do not
introduce any implicit conditions
- That the rule's condition is exactly as the user has written, and nothing more
Can be combined with the attribute
fire_when_enabled to guarantee that
the rule will fire when its explicit
condition is true
Matching to external interfaces
... the external interface may not
use the same RDY/EN protocol as
Bluespec; interface attributes are
available to handle this situation ...
Interface attributes
Useful attributes
- always_ready
- always_enabled
Attributes attach to a module
They apply to the interface provided by
that module – when the module is
synthesized
The attributes apply to all methods in
the interface
always_ready
This attribute has two effects:
Asserts that the ready signal for all
methods is True
- It is an error if the tool cannot prove this
Removes the associated port in the
generated RTL module
- Any users of the module will assume a value of True for the ready signals
- No RDY_method signals are found
always_enabled
Ties to True the enable signal for all action
methods
- If the method cannot be executed on every cycle (due to internal conflicts), bsc reports an error
Removes the associated port in the generated
RTL module
- Any user of the module must execute the method on every cycle, or it is an error
E.g. EN_method is assumed True and removed
Interface attributes
These attributes are used to match
externally-specified port lists which
do not have RDY and EN wires
Or for a synchronous module
which should receive input on
every cycle
Synchronous Binary Multiplier
Interface
interface Design_IFC;
method Action setInput (Bit#(16) x,
Bit#(16) y, Bool start);
method Bit#(32) prod();
method Bool ready();
endinterface : Design_IFC
(* always_ready,always_enabled *)
module mkDesign (Design_IFC);
module mkDesign(clk,
reset,
setInput_x,
setInput_y,
setInput_start,
prod,
ready);
Demo Example 1:
module mkMult1 (Mult_ifc);
Reg#(Tout) product <- mkReg (0);
Reg#(Tout) d <- mkReg (0);
Reg#(Tin) r <- mkReg (0);
rule cycle (r != 0);
if (r[0] == 1) product <= product + d;
d <= d << 1;
r <= r >> 1;
endrule: cycle
method Action start (Tin d_init, Tin r_init) if (r == 0);
d <= zeroExtend(d_init);
r <= r_init; product <= 0;
endmethod
method Tout result () if (r == 0);
return product;
endmethod
endmodule: mkMult1
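For intuition, the shift-and-add datapath of mkMult1 can be mirrored in plain Python (an illustration, not generated from the BSV):

```python
def mult(d_init, r_init):
    # start: load the registers; the cycle rule fires while r != 0
    product, d, r = 0, d_init, r_init
    while r != 0:            # rule cycle (r != 0)
        if r & 1:            # if (r[0] == 1)
            product += d     #   product <= product + d
        d <<= 1              # d <= d << 1
        r >>= 1              # r <= r >> 1
    return product           # method result, ready once r == 0

assert mult(7, 9) == 63
```

Each loop iteration corresponds to one firing of the cycle rule, i.e. one clock cycle of the synthesized design.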
Test bench for Example 1
module mkTest (Empty);
// arrays a, b contain the numbers to be multiplied and
// array ab contains the correct answers.
Mult_ifc m <- mkMult1();
Reg#(Bool) busy <- mkReg(False);
Reg#(int) i <- mkReg(0); Reg#(int) j <- mkReg(0);
rule data_in (!busy);
m.start (a[i], b[i]);
i <= i+1; busy <= True;
endrule
rule data_out (busy);
Tout x = m.result();
$display ("%0.h X %0.h = %0.h Status: %0.d",
a[j], b[j], x, x==ab[j] );
j <= j+1; busy <= False;
endrule
endmodule: mkTest
Example1.bsv Demo
Compiling with -u
The (* synthesize *) pragma
Method RDY and EN
Making the multiplier synchronous
(* always_ready *)
Altering the testbench
(* always_enabled *)
Examining the final verilog ports
Lecture 23: Fault-Tolerant Quantum Computation
Scribed by: Jonathan Hodges
Department of Nuclear Engineering, MIT
December 4, 2003
1 Introduction
Before von Neumann proposed classical fault-tolerance in the 1940's, it was assumed that a computational device comprised of more than 10⁶ components could not perform a computation without encountering a fatal hardware error. Von Neumann proved that one could indeed make the computation work, as long as a certain degree of overhead was tolerable. Thus follows the classical fault-tolerance theorem:

Theorem 1 (Classical Fault-Tolerance). A computation of length n using perfect computational components can be executed reliably (i.e. with probability 1 − 1/α(n) for polynomial α) using O(n log n) steps, provided the components work with probability 1 − ε of the time and that the faults encountered are independent.
We can sketch von Neumann's proof of fault-tolerance as follows: Given classical AND, NOT, and OR gates, let us encode a 0 into many 0's, repeated c log n times, where c is some constant:

(1) 0 → 0000000

Now take two identical copies of "0", call them a and b, and put them through the AND gate. Ideally one should get 1111111. Instead, suppose the strings received are 1110101 on a and 0111111 on b. If one performs a bitwise AND on each successive bit of the bit strings a and b, the result is 0110101. Taking "triples" of bits of this resulting string, one performs a majority vote. Thus, if one bit has an error probability of p, two bits in a triple being erroneous occurs with probability 3p². As long as p is small, one can perform operations with fault-tolerance. The same type of proof can be shown for NOT and OR gates, thus giving universality.
In short, if one has components of a computer whose fidelity is high enough, and adding additional components is relatively easy, then the computation is indeed plausible. As it turns out, the critics were too critical. Your desktop computer does not even use software error-correction for doing computations, as the ε for our silicon-based hardware has become increasingly small.
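The triple-redundancy majority vote sketched above can be checked in a few lines of Python (an illustration, not part of the original notes; the 3-bit block size and the error rate p are arbitrary choices):

```python
def encode(bit, n=3):
    # replicate a logical bit across n physical bits (n odd)
    return [bit] * n

def majority(bits):
    # decode a noisy block by majority vote
    return int(sum(bits) > len(bits) // 2)

# a single flipped bit in a triple is corrected
assert majority([1, 0, 1]) == 1

# a triple fails only when >= 2 of its 3 bits flip:
# probability 3 p^2 (1 - p) + p^3, i.e. ~3 p^2 for small p
p = 0.01
p_fail = 3 * p**2 * (1 - p) + p**3
assert p_fail < p  # redundancy helps whenever p < 1/2
```

The quadratic suppression (p to roughly 3p²) is the same mechanism the quantum constructions below aim to reproduce.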
P. Shor – 18.435/2.111 Quantum Computation – Lecture 23

2 Using classical techniques for quantum computation

Classically, four methods exist for dealing with fault-tolerant computing, but only one of these will prove feasible. Consistency checks, like the parity of a bit string, work classically, but in the quantum world are simply not powerful enough. Checkpoints require stopping the computation at a specific point, checking the result, then starting the computation again. The probabilistic nature of quantum mechanics and the no-cloning theorem make this technique useless for QC. Classically, one might make many copies of the computation to perform a massive redundancy scheme; however, this errs like consistency checks, as it is not "powerful enough" for quantum computations. Thus, one is left with error correcting codes, which have previously been shown portable from the classical to the quantum domain.
3 Quantum Fault Tolerance

In order to take an errorless quantum computation to a fault-tolerant computation, one first encodes each qubit into a quantum error correcting code. Every gate in the circuit should then be replaced by a fault-tolerant version of it. Finally, insert an error correction step after every gate. Above we argued that fault-tolerance in classical computations needs only AND, NOT, and OR gates. For universal quantum computation only the CNOT and single-qubit gates are required, but formulating fault-tolerant operations becomes easier with a finite gate set. CNOT, Hadamard, σx, σz, T (the π/8 gate), and Toffoli will be proven useful.

Theorem 2 (Kitaev-Solovay Theorem). Given a set of gates in SU(2) (or SU(k) generally) that generates a dense set in SU(2), any gate U ∈ SU(2) can be approximated to within ε using O(log^c(1/ε)) gates, where 1 ≤ c ≤ 2. See Appendix 3 of Nielsen and Chuang for more details.
3.1 Fault Tolerance of σx

In order to show that a σx gate can be done with fault tolerance, let us encode 1 qubit into k qubits using a CSS code (the Steane code):

(2) |0⟩ → (1/√|C2|) Σ_{x∈C2} |x⟩
(3) |1⟩ → (1/√|C2|) Σ_{x∈C2} |x + v⟩

Since dim(C1) is k and dim(C2) is k − 1, this code will encode a single qubit, and it satisfies the inclusions 0 ⊆ C2 ⊆ C1. The codewords v are those not overlapping the two classical codes: v ∈ C1 − C2. Since the encoded |0⟩ and |1⟩ are orthogonal, a σx should just interchange the two encodings. These two states are separated by v, which amounts to performing a σx on each individual qubit. Now suppose an error is made in performing σx on one of the qubits, where the errors on each qubit are uncorrelated. Then this code will be able to correct for these errors, resulting in a quantum error correcting code and operation that performs σx with fault-tolerance.
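For intuition (an illustration only, using the classical 3-bit repetition code rather than the Steane code), a transversal X maps the encoded 0 to the encoded 1, and a single faulty position is still correctable:

```python
def encoded(bit, n=3):
    # classical stand-in for the logical basis states |0_L>, |1_L>
    return [bit] * n

def transversal_x(word):
    # apply X (bit-flip) to each physical position independently
    return [b ^ 1 for b in word]

def decode(word):
    # majority vote stands in for the code's error correction
    return int(sum(word) > len(word) // 2)

word = transversal_x(encoded(0))
assert decode(word) == 1      # the transversal X maps 0_L to 1_L
word[1] ^= 1                  # one faulty position, errors uncorrelated
assert decode(word) == 1      # a single error is still corrected
```

The key property mirrored here is that the logical gate touches each position once, so one faulty gate produces at most one correctable error.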
3.2 Fault Tolerance of σz

By using the Steane code above, the equivalent of σz on an unencoded qubit is to apply σz to those individual qubits with a 1 in the codeword w, where w ∈ C2⊥ − C1⊥. This results in a state |a⟩ acquiring a phase of (−1)^{a·w}. Under such a transformation the code words become

(4) (1/√|C2|) Σ_{x∈C2} |x⟩ → (1/√|C2|) Σ_{x∈C2} (−1)^{x·w} |x⟩ = (1/√|C2|) Σ_{x∈C2} |x⟩

(5) (1/√|C2|) Σ_{x∈C2} |x + v⟩ → (1/√|C2|) Σ_{x∈C2} (−1)^{x·w} (−1)^{v·w} |x + v⟩ = −(1/√|C2|) Σ_{x∈C2} |x + v⟩

since at least 1 vector in C1 gives v·w = 1.
3.3 Fault Tolerance of Hadamard Gate

The fault tolerance of the Hadamard gate under this CSS encoding can be seen under the additional constraint C1 = C2⊥. If the function E(x) represents the act of encoding the bit, the action of a Hadamard on an encoded qubit must follow the transformations:

(6) E(|0⟩) → (1/√2)(E(|0⟩) + E(|1⟩))
(7) E(|1⟩) → (1/√2)(E(|0⟩) − E(|1⟩))

A Hadamard transformation of each individual qubit, H⊗k, applied to E(|0⟩) will give precisely the correct encoded transformation:

H⊗k (1/√|C2|) Σ_{x∈C2} |x⟩ = (1/2^{k/2}) (1/√|C2|) Σ_y Σ_{x∈C2} (−1)^{x·y} |y⟩ = (1/√|C2⊥|) Σ_{y∈C1} |y⟩ = E((|0⟩ + |1⟩)/√2)

The last line follows because E(|0⟩) is composed of all codewords in C2 and E(|1⟩) is everything in C1, but not in C2, by definition. A Hadamard transformation on E(|1⟩) simply adds in the phase factor of (−1)^{y·v}, which obviously follows from above. This phase factor will be unity if y ∈ C2, but −1 if y ∈ (C1 − C2), thus appropriately adding a phase to the states in the code that are E(|1⟩).
3.4 Fault Tolerance of CNOT Gate

The σx, σz, and H gates can all be performed on a single encoded qubit with fault-tolerance because these gates are always applied to single qubits. Likewise, given two single-qubit encoded states, one can perform CNOT operations between the kth qubit of one set and the kth qubit of the other. Thus there are at most two qubits interacting for a single gate, making errors independent among the sets of qubits, and thus correctible with the CSS code. This can be shown as follows:

U_CNOT⊗k (1/|C2|) Σ_{x∈C2} |x + v_a⟩ ⊗ Σ_{y∈C2} |y + v_b⟩ = (1/|C2|) Σ_{x∈C2} Σ_{y∈C2} |x + v_a⟩ ⊗ |x + y + v_a + v_b⟩ = (1/|C2|) Σ_{x∈C2} Σ_{y∈C2} |x + v_a⟩ ⊗ |y + v_a + v_b⟩

If v_a = v_b, then v_a + v_b = 0 in binary addition and the vector is unchanged. Otherwise, the resulting state will be a string of 1's added to all states |y⟩, which is just the encoding E(|1⟩). If an error occurs in any of the two-qubit CNOT operations, this will result in v_a or v_b not being all 0's or all 1's, and the CSS code will correct the appropriate state.
4 Error Correction With Fault-Tolerant Precision

Both classical and quantum error correction schemes require encoding information into a code, computing the syndrome of the code after errors may have occurred, then applying a syndrome-dependent correction step to the coding to recover the information. A simple means of checking the syndrome would be to find the parity of a subset of qubits in a code. One could thus perform a series of CNOT gates where a single target qubit will be flipped depending on the states of the control qubits. Measurement of this ancilla qubit would unveil the syndrome, but not with fault-tolerant precision.

This can easily be seen under a Hadamard transformation, H⊗(k+1), which reverses the direction of the CNOT gates and gives the dual CSS code in the H⊥ space. (H is the parity check matrix of the code C1.) Due to the reversed CNOT, if any of the gates have an error, the error will propagate forward in time due to the back-action of the CNOT. The stringent requirement of each error not affecting more than a single qubit (or pair in the FT CNOT construction) is not fulfilled.
Using the idea of single failure points between qubits, as seen in the FT CNOT construction, we start our parity check register on k qubits in the state:

(8) |ψ⟩ = (1/√(2^(k−1))) Σ_{s ∈ Z₂^k, |s| even} |s⟩

Measurement of |ψ⟩ in the canonical basis will result in either an odd parity bit string, indicating a syndrome of 1, or an even parity string for a 0 syndrome. The state |ψ⟩ can be created by applying the Hadamard transformation to the "cat" state:

(9) H⊗k |ψ⟩ = (1/√2)(|0⟩ + |1⟩)

(The state |x⟩ represents a k-length string of x's.)

Now suppose a maximally entangled state can be created and verified by performing a few CNOT gates between the bits of the cat state and an ancilla. Fault tolerance is not an issue here; one only wants to know if the state is maximally entangled. Measuring |0⟩ on the ancilla for a reasonable number of test qubits ensures that the state is in some superposition of the states |0⟩ and |1⟩. The Hadamard transform of this state:

(10) H⊗k(α|0⟩ + β|1⟩) = ((α + β)/√2) (1/√(2^(k−1))) Σ_{s even} |s⟩ + ((α − β)/√2) (1/√(2^(k−1))) Σ_{s odd} |s⟩

Thus if α = β, the state is all zeros and no back-action will occur. The all ones state simply adds the ones vector to the qubits. Thus, fault-tolerant measurements can be obtained.
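The parity structure in eq. (10) can be verified by brute force on a few qubits (an illustration, not from the notes; amplitudes are left unnormalized):

```python
from itertools import product

def hadamard_of_cat(k, alpha, beta):
    # unnormalized amplitude of |s> in H^(x)k (alpha|0...0> + beta|1...1>):
    # <s|H^(x)k|0...0> is constant, <s|H^(x)k|1...1> carries (-1)^parity(s)
    return {s: alpha + beta * (-1) ** (sum(s) % 2)
            for s in product((0, 1), repeat=k)}

amps = hadamard_of_cat(3, 1, 1)   # alpha = beta
# only even-parity strings survive, as in eq. (10)
assert all(a == 0 for s, a in amps.items() if sum(s) % 2 == 1)
assert all(a == 2 for s, a in amps.items() if sum(s) % 2 == 0)
```

With alpha = -beta the support flips to the odd-parity strings, matching the second term of eq. (10).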
18.445 Introduction to Stochastic Processes
Lecture 13: Countable state space chains 2
Hao Wu
MIT
1 April 2015
Recall Suppose that P is irreducible.
The Markov chain is recurrent if and only if P_x[τ_x^+ < ∞] = 1, for some x.
The Markov chain is positive recurrent if and only if E_x[τ_x^+] < ∞, for some x.
Today’s Goal
stationary distribution
convergence to stationary distribution
Stationary distribution
Theorem
An irreducible Markov chain is positive recurrent if and only if there
exists a probability measure π on Ω such that π = πP.
Corollary
If an irreducible Markov chain is positive recurrent, then
there exists a probability measure π such that π = πP ;
π(x) > 0 for all x. In fact, π(x) = 1 / E_x[τ_x^+].
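A quick numerical check of the corollary (an illustration, not from the slides) on a two-state chain with hypothetical transition probabilities a and b:

```python
from fractions import Fraction as F

# two-state chain: stay/leave probabilities chosen arbitrarily
a, b = F(1, 3), F(1, 2)
P = [[1 - a, a], [b, 1 - b]]

# stationary distribution, solved by hand from pi = pi P
pi = [b / (a + b), a / (a + b)]
piP = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
assert piP == pi and sum(pi) == 1

# E_0[tau_0^+] by first-step analysis: from 0 we pay one step, and with
# probability a we land in 1, from which the mean hitting time of 0 is 1/b
E0 = 1 + a * (1 / b)
assert pi[0] == 1 / E0   # matches pi(x) = 1 / E_x[tau_x^+]
```

Exact rational arithmetic via `fractions` makes the identity hold with equality rather than up to rounding.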
Convergence to the stationary
Theorem
If an irreducible Markov chain is positive recurrent and aperiodic, then
lim_{n→∞} P_x[X_n = y] = π(y) > 0, for all x, y.
Theorem
If an irreducible Markov chain is null recurrent, then
lim_{n→∞} P_x[X_n = y] = 0, for all x, y.
Convergence to the stationary distribution
Recall Consider a Markov chain with state space Ω (countable) and transition matrix P. For
each x ∈ Ω, define
T(x) = { n ≥ 1 : P^n(x, x) > 0 }.
Then
gcd(T(x)) = gcd(T(y)), for all x, y.
We say the chain is aperiodic if gcd(T(x)) = 1.
Theorem
Suppose that the Markov chain is irreducible and aperiodic. If the chain
is positive recurrent, then
lim_{n→∞} ||P^n(x, ·) − π||_TV = 0.
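These theorems can be illustrated numerically (a sketch of our own; the 3-state transition matrix below is made up, chosen to be irreducible and aperiodic):

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution: left eigenvector of P for eigenvalue 1 (pi = pi P).
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi = pi / pi.sum()
assert np.allclose(pi @ P, pi)

# ||P^n(x, .) - pi||_TV is non-increasing and tends to 0 for this chain.
Pn = np.eye(3)
tv_prev = 1.0
for n in range(1, 30):
    Pn = Pn @ P
    tv = 0.5 * np.abs(Pn[0] - pi).sum()     # TV distance from state x = 0
    assert tv <= tv_prev + 1e-12
    tv_prev = tv
assert tv_prev < 1e-6
print("pi =", np.round(pi, 4), " TV after 29 steps:", tv_prev)
```
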
MIT OpenCourseWare
http://ocw.mit.edu
18.445 Introduction to Stochastic Processes
Spring 2015
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/18-445-introduction-to-stochastic-processes-spring-2015/01eb8f31f3e72b4532887f64419f2267_MIT18_445S15_lecture13.pdf |
MIT OpenCourseWare
http://ocw.mit.edu
18.01 Single Variable Calculus
Fall 2006
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Lecture 32: Exam 4 Review
18.01 Fall 2006
Exam 4 Review
1. Trig substitution and trig integrals.
2. Partial fractions.
3. Integration by parts.
4. Arc length and surface area of revolution
5. Polar coordinates
6. Area in polar coordinates.
Questions from the Students
• Q: What do we need to know about parametric equations?
• A: Just keep this formula in mind:
ds = √( (dx/dt)^2 + (dy/dt)^2 ) dt
Example: You’re given x(t) = t^4 and y(t) = 1 + t. Find s (length).
ds = √( (4t^3)^2 + (1)^2 ) dt
Then, integrate with respect to t.
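As a numerical sanity check (not part of the review; the integration bounds t ∈ [0, 1] are our own choice), Simpson's rule applied to this ds gives the length:

```python
import math

# Arc length of x(t) = t^4, y(t) = 1 + t over hypothetical bounds t in [0, 1].
def speed(t):
    # |r'(t)| = sqrt((dx/dt)^2 + (dy/dt)^2)
    return math.sqrt((4 * t ** 3) ** 2 + 1 ** 2)

def simpson(f, a, b, n=1000):               # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

s = simpson(speed, 0.0, 1.0)
print(round(s, 6))                          # roughly 1.6
assert 1.4 < s < 1.8
```
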
• Q: Can you quickly review how to do partial fractions?
• A: When finding partial fractions, first check whether the degree of the numerator is greater
than or equal to the degree of the denominator. If so, you first need to do algebraic long-
division. If not, then you can split into partial fractions.
Example.
(x^2 + x + 1) / ((x − 1)^2 (x + 2))
We already know the form of the solution:
(x^2 + x + 1) / ((x − 1)^2 (x + 2)) = A/(x − 1) + B/(x − 1)^2 + C/(x + 2)
There are two coefficients that are easy to find: B and C. We can find these by the cover-up
method.
B = (1^2 + 1 + 1) / (1 + 2) = 3/3 = 1    (x → 1)
To find C,
C = ((−2)^2 + (−2) + 1) / (−2 − 1)^2 = 3/9 = 1/3    (x → −2)
To find A, one method is to plug in the easiest value of x other than the ones we already used
(x = 1, −2). Usually, we use x = 0.
1 / ((−1)^2 (2)) = A/(−1) + 1/(−1)^2 + (1/3)/2
and then solve to find A.
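Solving gives A = 2/3, and the full decomposition can be checked with exact rational arithmetic (our own verification, not part of the handout):

```python
from fractions import Fraction as F

A, B, C = F(2, 3), F(1), F(1, 3)

# Verify (x^2 + x + 1)/((x-1)^2 (x+2)) = A/(x-1) + B/(x-1)^2 + C/(x+2)
# exactly at several rational sample points (avoiding the poles x = 1, -2).
for x in (F(0), F(2), F(3), F(-1), F(1, 2)):
    lhs = (x * x + x + 1) / ((x - 1) ** 2 * (x + 2))
    rhs = A / (x - 1) + B / (x - 1) ** 2 + C / (x + 2)
    assert lhs == rhs
print("decomposition verified; A =", A)
```
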
The Review Sheet handed out during lecture follows on the next page.
Exam 4 Review Handout
1. Integrate by trigonometric substitution; evaluate the trigonometric integral and work
backwards to the original variable by evaluating trig(trig^−1) using a right triangle:
a) √(a^2 − x^2): use x = a sin u, dx = a cos u du
b) √(a^2 + x^2): use x = a tan u, dx = a sec^2 u du
c) √(x^2 − a^2): use x = a sec u, dx = a sec u tan u du
2. Integrate rational functions P/Q (ratio of polynomials) by the method of partial fractions:
If the degree of P is less than the degree of Q, then factor Q completely into linear and quadratic
factors, and write P/Q as a sum of simpler terms. For example,
(3x^2 + 1) / ((x − 1)(x + 2)^2 (x^2 + 9)) = A/(x − 1) + B1/(x + 2) + B2/(x + 2)^2 + (Cx + D)/(x^2 + 9)
Terms such as D/(x^2 + 9) can be integrated using the trigonometric substitution x = 3 tan u.
This method can be used to evaluate the integral of any rational function. In practice, the
hard part turns out to be factoring the denominator! In recitation you encountered two other steps
required to cover every case systematically, namely, completing the square1 and long division.2
3. Integration by parts:
∫_a^b u v′ dx = [uv]_a^b − ∫_a^b u′ v dx
This is used when u′v is simpler than uv′. (This is often the case if u′ is simpler than u.)
4. Arclength: ds = √(dx^2 + dy^2). Depending on whether you want to integrate with respect to
x, t or y this is written
ds = √(1 + (dy/dx)^2) dx;  ds = √((dx/dt)^2 + (dy/dt)^2) dt;  ds = √((dx/dy)^2 + 1) dy
5. Surface area for a surface of revolution:
a) around the x-axis: 2πy ds = 2πy √(1 + (dy/dx)^2) dx (requires a formula for y = y(x))
b) around the y-axis: 2πx ds = 2πx √((dx/dy)^2 + 1) dy (requires a formula for x = x(y))
6. Polar coordinates: x = r cos θ, y = r sin θ (or, more rarely, r = √(x^2 + y^2), θ = tan^−1(y/x))
a) Find the polar equation for a curve from its equation in (x, y) variables by substitution.
b) Sketch curves given in polar coordinates and understand the range of the variable θ (often
in preparation for integration).
7. Area in polar coordinates:
∫_{θ1}^{θ2} (1/2) r^2 dθ
(Pay attention to the range of θ to be sure that you are not double-counting regions or missing
them.)
1For example, we rewrite the denominator x^2 + 4x + 13 = (x + 2)^2 + 9 = u^2 + a^2 with u = x + 2 and a = 3.
2Long division is used when the degree of P is greater than or equal to the degree of Q. It expresses P(x)/Q(x) =
P1(x) + R(x)/Q(x) with P1 a quotient polynomial (easy to integrate) and R a remainder. The key point is that the
remainder R has degree less than Q, so R/Q can be split into partial fractions.
The following formulas will be printed with Exam 4
sin^2 x + cos^2 x = 1;  sec^2 x = tan^2 x + 1
sin^2 x = 1/2 − (1/2) cos 2x;  cos^2 x = 1/2 + (1/2) cos 2x
cos 2x = cos^2 x − sin^2 x;  sin 2x = 2 sin x cos x
d/dx tan x = sec^2 x;  d/dx sec x = sec x tan x
∫ tan x dx = − ln(cos x) + c;  ∫ sec x dx = ln(sec x + tan x) + c
d/dx tan^−1 x = 1/(1 + x^2);  d/dx sin^−1 x = 1/√(1 − x^2)
See the next page for a review on integration of rational functions.
Postscript: Systematic integration of rational functions
For a general rational function P/Q, the first step is to express P/Q as the sum of a polynomial
and a ratio in which the numerator has smaller degree than the denominator.
For example,
x^3 / (x^2 − 2x + 1) = x + 2 + (3x − 2)/(x^2 − 2x + 1)
(To carry out this long division, do not factor the denominator Q(x) = x^2 − 2x + 1, just leave it
alone.) The quotient x + 2 is a polynomial and is easy to integrate. The remainder term
(3x − 2) / (x − 1)^2
has a numerator 3x − 2 of degree 1, which is less than the degree 2 of the denominator (x − 1)^2.
Therefore there is a partial fraction decomposition. In fact,
(3x − 2)/(x − 1)^2 = ((3x − 3) + 1)/(x − 1)^2 = 3/(x − 1) + 1/(x − 1)^2
In general, if P has degree n and Q has degree m, then long division gives
P(x)/Q(x) = P1(x) + R(x)/Q(x)
in which P1, the quotient in the long division, has degree n − m and R, the remainder in the long
division, has degree at most m − 1.
Evaluation of the “simple” pieces
The integral
∫ dx/(x − a)^n = (1/(1 − n)) (x − a)^{1−n} + c
if n ≠ 1, and ln|x − a| + c if n = 1. On the other hand the terms
∫ x dx/(Ax^2 + Bx + C)^n  and  ∫ dx/(Ax^2 + Bx + C)^n
are handled by first completing the square:
Ax^2 + Bx + C = A(x + B/2A)^2 + (C − B^2/4A)
Using the variable u = √A (x + B/2A) yields combinations of integrals of the form
∫ u du/(u^2 + k^2)^n  and  ∫ du/(u^2 + k^2)^n
The first integral is handled by the substitution w = u^2 + k^2, dw = 2u du. The second integral can
be worked out using the trigonometric substitution u = k tan θ, du = k sec^2 θ dθ. This then leads to
sec-tan integrals, and the actual computations for large values of n are long.
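For n = 1 this substitution gives ∫ du/(u^2 + k^2) = (1/k) tan^−1(u/k) + c, which a simple numerical quadrature confirms (our own check; k = 2 is an arbitrary choice):

```python
import math

k = 2.0                                     # arbitrary positive constant

def antideriv(u):
    # (1/k) * arctan(u/k), the claimed antiderivative for n = 1
    return math.atan(u / k) / k

# Midpoint-rule integral of 1/(u^2 + k^2) on [0, 1]
n = 100000
h = 1.0 / n
total = sum(h / (((i + 0.5) * h) ** 2 + k * k) for i in range(n))

assert abs(total - (antideriv(1.0) - antideriv(0.0))) < 1e-9
print("integral check passed:", round(total, 8))
```
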
There are also other cases that we will not cover systematically. Examples are below:
1. If Q(x) = (x − a)^m (x − b)^n, then the expression is
A1/(x − a) + A2/(x − a)^2 + ··· + Am/(x − a)^m + B1/(x − b) + B2/(x − b)^2 + ··· + Bn/(x − b)^n
2. If there are quadratic factors like (Ax^2 + Bx + C)^p, one gets terms
(a1 x + b1)/(Ax^2 + Bx + C) + (a2 x + b2)/(Ax^2 + Bx + C)^2 + ··· + (ap x + bp)/(Ax^2 + Bx + C)^p
for each such factor. (To integrate these quadratic pieces complete the square and make a
trigonometric substitution.)
2.160 Identification, Estimation, and Learning
Lecture Notes No. 3
February 15, 2006
2.3 Physical Meaning of Matrix P
The Recursive Least Squares (RLS) algorithm updates the parameter vector θ̂(t − 1) based on
new data ϕ(t), y(t) in such a way that the overall squared error may be minimal. This is done by
multiplying the prediction error y(t) − ϕᵀ(t)θ̂(t − 1) by a gain matrix which contains matrix P_{t−1}.
To better understand the RLS algorithm, let us examine the physical meaning of matrix P_{t−1}.
Recall the definition of the matrix:
P_t^{−1} = Σ_{i=1}^{t} ϕ(i) ϕᵀ(i) = Φ Φᵀ,   Φ = [ϕ(1) ··· ϕ(t)]    (17)
Φ ∈ R^{m×t},  Φᵀ ∈ R^{t×m},  Φ Φᵀ ∈ R^{m×m}
Note that matrix ΦΦᵀ varies depending on how the set of vectors {ϕ(i)} spans the m-dimensional
space. See the figure below.
[Figure: Geometric interpretation of matrix P^{−1} = ΦΦᵀ as an ellipsoid in the m-dimensional
space. The direction of λ_max(ΦΦᵀ) is "well traveled" — many ϕ-vectors point that way — and
coincides with the direction of λ_min(P); the direction of λ_min(ΦΦᵀ) is a less traveled direction
and coincides with λ_max(P). A second panel shows new data ϕ(t).]
Since ΦΦᵀ ∈ R^{m×m} is a symmetric matrix of real numbers, it has all real eigenvalues. The
eigenvectors associated with the individual eigenvalues are also real. Therefore, the matrix ΦΦᵀ
can be reduced to a diagonal matrix using a coordinate transformation, i.e. using the eigenvectors
as the bases.
ΦΦᵀ ⇒ D = diag(λ1, λ2, ..., λm) ∈ R^{m×m},   λ1 = λmax ≥ λ2 ≥ ··· ≥ λm = λmin    (19)
P = (ΦΦᵀ)^{−1} ⇒ D^{−1} = diag(1/λ1, 1/λ2, ..., 1/λm) ∈ R^{m×m}    (20)
The direction of λmax(ΦΦᵀ) = the direction of λmin(P).
If λmin = 0 , then det(ΦΦT ) = 0 , and the ellipsoid collapses. This implies that there is no
input data ϕ(i) in the direction of λmin , i.e. the input data set does not contain any
information in that direction. In consequence, the m-dimensional parameter vector θ
cannot be fully determined by the data set.
In the direction of λmax, there are plenty of input data ϕ(i). This direction has been well
explored, well excited. Although new data are obtained, the correction to the parameter vector
θ̂(t − 1) is small if the new input data ϕ(t) is in the same direction as that of λmax. See the
second figure above.
The above observations are summarized as follows:
1) Matrix P determines the gain of the prediction error feedback
θ̂(t) = θ̂(t − 1) + K_t e(t)    (17)
where K_t is a varying gain matrix: K_t = P_{t−1} ϕ(t) / ( 1 + ϕᵀ(t) P_{t−1} ϕ(t) ).
2) If a new data point ϕ(t) is aligned with the direction of λmax(ΦΦᵀ), or λmin(P_{t−1}), then
ϕᵀ(t) P_{t−1} ϕ(t) << 1, and K_t ≈ P_{t−1} ϕ(t), which is small. Therefore the correction is small.
3) Matrix P_t represents how much data we already have in each direction in the m-dimensional
space. The more we already know, the less the error correction gain K_t becomes. The correction
∆θ gets smaller and smaller as t tends to infinity.
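These observations can be illustrated numerically (a made-up 2-dimensional example of our own, not from the notes): the gain K_t = P_{t−1}ϕ(t)/(1 + ϕᵀ(t)P_{t−1}ϕ(t)) is small for new data along a well-excited direction and large along a poorly excited one.

```python
import numpy as np

rng = np.random.default_rng(0)
m, t = 2, 200
# Regressors excited mostly along [1, 0]; direction [0, 1] is "less traveled".
Phi = rng.normal(size=(m, t)) * np.array([[1.0], [0.05]])

P = np.linalg.inv(Phi @ Phi.T)              # P = (Phi Phi^T)^{-1}

def gain(phi):
    # K_t = P phi / (1 + phi^T P phi), the RLS gain for one new sample
    return P @ phi / (1.0 + phi @ P @ phi)

k_well = np.linalg.norm(gain(np.array([1.0, 0.0])))   # well-excited direction
k_poor = np.linalg.norm(gain(np.array([0.0, 1.0])))   # poorly excited direction
assert k_poor > 10 * k_well
print("gain, well-excited dir:", k_well, " poorly excited dir:", k_poor)
```
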
2.4 Initial Conditions and Properties of RLS
a) Initial conditions for Po
Po does not have to be accurate (close to its correct value), since it is recursively
modified. But Po must be good enough to make the RLS algorithm executable. For
this,
Po must be a positive definite matrix, such as the identity matrix I.
(21)
Depending on the initial values of θ̂(0) and P₀, the (best) estimation thereafter will be different.
Question: How do the initial conditions influence the estimate? The following theorem shows
exactly how the RLS algorithm works, given initial conditions.
Theorem
The Recursive Least Squares (RLS) algorithm minimizes the following cost function:
J_t(θ) = (1/2) Σ_{i=1}^{t} ( y(i) − ϕᵀ(i)θ )² + (1/2) ( θ − θ̂(0) )ᵀ P₀^{−1} ( θ − θ̂(0) )    (22)
where P₀ is an arbitrary positive definite matrix (m by m) and θ̂(0) ∈ R^m is arbitrary.
Proof  Differentiating J_t(θ),
dJ_t(θ)/dθ = 0
Collecting terms,
− Σ_{i=1}^{t} ( y(i) − ϕᵀ(i)θ ) ϕ(i) + P₀^{−1} ( θ − θ̂(0) ) = 0
( Σ_{i=1}^{t} ϕ(i)ϕᵀ(i) + P₀^{−1} ) θ = Σ_{i=1}^{t} y(i)ϕ(i) + P₀^{−1} θ̂(0)
The parameter vector minimizing (22) is then given by
θ̂(t) = P_t ( Σ_{i=1}^{t} y(i)ϕ(i) + P₀^{−1} θ̂(0) )    (23)
     = P_t ( y(t)ϕ(t) + Σ_{i=1}^{t−1} y(i)ϕ(i) + P₀^{−1} θ̂(0) )
     = P_t ( y(t)ϕ(t) + P_{t−1}^{−1} θ̂(t − 1) )    (24)
Recall P_t^{−1} = ϕ(t)ϕᵀ(t) + P_{t−1}^{−1}, so
θ̂(t) = P_t ( y(t)ϕ(t) + [ P_t^{−1} − ϕ(t)ϕᵀ(t) ] θ̂(t − 1) )
     = θ̂(t − 1) + P_t ϕ(t) ( y(t) − ϕᵀ(t) θ̂(t − 1) )    (25)
Postmultiplying ϕ(t) to both sides of (14) gives
P_t ϕ(t) = P_{t−1} ϕ(t) − P_{t−1} ϕ(t) ϕᵀ(t) P_{t−1} ϕ(t) / ( 1 + ϕᵀ(t) P_{t−1} ϕ(t) )
         = P_{t−1} ϕ(t) / ( 1 + ϕᵀ(t) P_{t−1} ϕ(t) )    (26)
Using (26) in (25) yields (18), the RLS algorithm:
θ̂(t) = θ̂(t − 1) + [ P_{t−1} ϕ(t) / ( 1 + ϕᵀ(t) P_{t−1} ϕ(t) ) ] ( y(t) − ϕᵀ(t) θ̂(t − 1) )    (18)
Q.E.D.
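A minimal sketch of the resulting algorithm (our own code, with made-up data), checking that the recursion with θ̂(0) = 0 and P₀ = I reproduces the batch minimizer of (22):

```python
import numpy as np

rng = np.random.default_rng(1)
m, T = 3, 50
theta_true = np.array([1.0, -2.0, 0.5])     # hypothetical parameters
Phi = rng.normal(size=(T, m))
y = Phi @ theta_true + 0.01 * rng.normal(size=T)

theta = np.zeros(m)                         # theta_hat(0) = 0
P = np.eye(m)                               # P_0 = I
for t in range(T):
    phi = Phi[t]
    Pphi = P @ phi
    denom = 1.0 + phi @ Pphi
    theta = theta + (Pphi / denom) * (y[t] - phi @ theta)   # gain update (18)
    P = P - np.outer(Pphi, Pphi) / denom                    # P update

# Batch minimizer of (22) with theta_hat(0) = 0, P0 = I:
theta_batch = np.linalg.solve(Phi.T @ Phi + np.eye(m), Phi.T @ y)
assert np.allclose(theta, theta_batch, atol=1e-8)
print("RLS estimate:", np.round(theta, 4))
```
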
Discussion on the Theorem of RLS
θ̂(t) = arg min_θ { (1/2) Σ_{i=1}^{t} ( y(i) − ϕᵀ(i)θ )² + (1/2) ( θ − θ̂(0) )ᵀ P₀^{−1} ( θ − θ̂(0) ) }    (27)
where the first term (A) is the squared estimation error and the second term (B) is the weighted
squared distance from θ̂(0).
1) As t gets larger, more data are obtained and term A gets overwhelmingly larger than term B.
As a result, the influence of the initial conditions fades out.
2) In an early stage, i.e. for small time index t, θ̂ is pulled towards θ̂(0), particularly when the
eigenvalues of matrix P₀^{−1} are large.
3) In contrast, if the eigenvalues of P₀^{−1} are small, θ̂ tends to change more quickly in response
to the prediction error y(t) − ϕᵀ(t)θ̂.
4) The initial matrix P₀ represents the level of confidence for the initial parameter value θ̂(0).
Note: The P matrix involved in RLS with an initial condition P₀ has been extended in the RLS
theorem from the batch processing case of P_t^{−1} = Σ_{i=1}^{t} ϕ(i)ϕᵀ(i) to:
P_t^{−1} = Σ_{i=1}^{t} ϕ(i)ϕᵀ(i) + P₀^{−1}    (28)
Other important properties of RLS include:
• Convergence of θ̂(t). It can be shown that
lim_{t→∞} [ θ̂(t) − θ̂(t − 1) ] = 0    (29)
See Goodwin and Sin’s book, Ch. 3, for proof.
• The change to the P matrix, ∆P_t = P_t − P_{t−1}, is negative semi-definite, i.e.
∆P_t = − P_{t−1} ϕ(t) ϕᵀ(t) P_{t−1} / ( 1 + ϕᵀ(t) P_{t−1} ϕ(t) ) ≤ 0    (30)
for an arbitrary ϕ(t) ∈ R^m and positive definite P_{t−1}.
Exercise  Prove this property.
2.5 Estimation of Time-varying Parameters
Least Squares with Exponential Data Weighting
Forgetting factor: α, 0 < α ≤ 1    (31)
Use a large α for slowly changing processes and a small α for rapidly changing
parameters/processes.
Weighted Squared Error:
J_t(θ) = Σ_{i=1}^{t} α^{t−i} e²(i)    (32)
θ̂(t) = arg min_θ J_t(θ)    (33)
θ̂(t) is given by the following recursive algorithm:
θ̂(t) = θ̂(t − 1) + [ P_{t−1} ϕ(t) / ( α + ϕᵀ(t) P_{t−1} ϕ(t) ) ] ( y(t) − ϕᵀ(t) θ̂(t − 1) )    (34)
P_t = (1/α) [ P_{t−1} − P_{t−1} ϕ(t) ϕᵀ(t) P_{t−1} / ( α + ϕᵀ(t) P_{t−1} ϕ(t) ) ]    (35)
Exercise: Obtain (34) and (35) from (32) and (33).
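A scalar sketch of (34)-(35) (our own example; the jump location, noise level, and α = 0.95 are arbitrary choices) showing the forgetting factor tracking a parameter that changes mid-run:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.95                     # forgetting factor, 0 < alpha <= 1
theta, P = 0.0, 100.0            # scalar estimate and "covariance"

est = []
for t in range(400):
    theta_true = 1.0 if t < 200 else 3.0        # parameter jumps at t = 200
    phi = rng.normal()
    y = phi * theta_true + 0.05 * rng.normal()
    denom = alpha + phi * P * phi
    theta = theta + (P * phi / denom) * (y - phi * theta)   # Eq. (34)
    P = (P - (P * phi) ** 2 / denom) / alpha                # Eq. (35)
    est.append(theta)

assert abs(est[199] - 1.0) < 0.1  # locked on before the jump
assert abs(est[-1] - 3.0) < 0.1   # re-converged after the jump
print("estimate just before jump:", est[199], " at the end:", est[-1])
```
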
A drawback of the forgetting factor approach
When the system under consideration enters “steady state”, the matrix
P_{t−1} ϕ(t) ϕᵀ(t) P_{t−1} / ( α + ϕᵀ(t) P_{t−1} ϕ(t) )
tends to the null matrix. This implies
P_t ≈ (1/α) P_{t−1}    (36)
As α < 1, the factor 1/α makes P_t larger than P_{t−1}. Therefore {P_t} begins to increase
exponentially. This is the “Blow-Up” problem.
Remedy: Covariance Re-setting Approach
• The forgetting factor approach has the “Blow-Up” problem.
• The ordinary RLS: the P matrix gets small after some iterations (typically 10-20 iterations).
Then the gain dramatically reduces, and θ̂ is no longer varying.
The Covariance Re-Setting method is to solve these shortcomings by occasionally re-setting the
P matrix to:
P_t* = kI,  0 < k < ∞    (37)
This re-vitalizes the algorithm.
2.6 Orthogonal Projection
The RLS algorithm provides an iterative procedure to converge to its final parameter value. This
may take more than m (the dimension of θ) steps.
The Orthogonal Projection algorithm provides the least squares solution exactly in m recursive
steps. Assume
Φ_m = [ϕ(1) ϕ(2) ··· ϕ(m)]    (38)
spans the whole m-dimensional space. Set P₀ = I (the m × m identity matrix) and θ̂(0) arbitrary.
Compute
θ̂(t) = θ̂(t − 1) + [ P_{t−1} ϕ(t) / ( ϕᵀ(t) P_{t−1} ϕ(t) ) ] ( y(t) − ϕᵀ(t) θ̂(t − 1) )    (39)
where matrix P_{t−1} is updated with the same recursive formula as RLS.
Note that the +1 involved in the denominator of RLS is eliminated in (39). This causes a
numerical problem: when ϕᵀ(t) P_{t−1} ϕ(t) is small, the gain is large.
2-parameter example
This orthogonal projection algorithm is more efficient, but is very sensitive to noisy data. It is
ill-conditioned when ϕᵀ(t) P_{t−1} ϕ(t) ≈ 0. RLS is more robust.
2.7 Multi-Output, Weighted Least Squares Estimation
Consider an l-output process with outputs y1(t), y2(t), ..., yl(t), collected as
ȳ(t) = [ y1(t), ..., yl(t) ]ᵀ ∈ R^l    (40)
For each output, ŷi(t) = ϕiᵀ(t)θ, so collectively
ŷ̄(t) = Ψᵀ(t)θ,   Ψᵀ(t) = [ ϕ1ᵀ ; ... ; ϕlᵀ ] ∈ R^{l×m}
Error:
ē(t) = [ e1(t), ..., el(t) ]ᵀ = ȳ(t) − Ψᵀ(t)θ
Consider that each squared error is weighted differently, or
Weighted Multi-Output Squared Error:
J_t(θ) = Σ_{i=1}^{t} ēᵀ(i) W_i ē(i) = Σ_{i=1}^{t} ( ȳ(i) − Ψᵀ(i)θ )ᵀ W_i ( ȳ(i) − Ψᵀ(i)θ )    (41)
θ̂(t) = arg min_θ J_t(θ) = P_t B_t    (42)
where
P_t = ( Σ_{i=1}^{t} Ψ(i) W_i Ψᵀ(i) )^{−1},   B_t = Σ_{i=1}^{t} Ψ(i) W_i ȳ(i)
The recursive algorithm:
θ̂(t) = θ̂(t − 1) + P_t Ψ(t) W_t ( ȳ(t) − Ψᵀ(t) θ̂(t − 1) )    (43)
P_t = P_{t−1} − P_{t−1} Ψ(t) [ W_t^{−1} + Ψᵀ(t) P_{t−1} Ψ(t) ]^{−1} Ψᵀ(t) P_{t−1}    (44)
2.160 Identification, Estimation, and Learning
Lecture Notes No. 1
February 8, 2006
Mathematical models of real-world systems are often
too difficult to build based on first principles
alone.
Figure by MIT OCW.
System Identification: “Let the data speak about the system”.
Figure by MIT OCW.
Image removed for copyright reasons.
HVAC
Courtesy of Prof. Asada. Used with permission.
Physical Modeling
1. Passive elements: mass, damper, spring
2. Sources
3. Transducers
4. Junction structure
Physically meaningful parameters
G(s) = Y(s)/U(s) = ( b_m s^m + b_{m−1} s^{m−1} + ··· + b_1 s + b_0 ) / ( s^n + a_{n−1} s^{n−1} + ··· + a_1 s + a_0 )
a_i = a_i(M, K, B),  b_i = b_i(M, K, B)
System Identification
Input u(t) → [ Black Box ] → Output y(t)
G(s) = Y(s)/U(s) = ( b_m s^m + b_{m−1} s^{m−1} + ··· + b_1 s + b_0 ) / ( s^n + a_{n−1} s^{n−1} + ··· + a_1 s + a_0 )
Comparison: Physical modeling vs. Black Box
Physical modeling — Pros
1. Physical insight and knowledge
2. Modeling a conceived system
before hardware is built
Cons
1. Often leads to high system order with too many parameters
2. Input-output model has a complex parameter structure
3. Not convenient for parameter tuning
4. Complex system; too difficult to analyze
Black Box — Pros
1. Close to the actual input-output
behavior
2. Convenient structure for
parameter tuning
3. Useful for complex systems; too
difficult to build physical model
Cons
1. No direct connection to physical
parameters
2. No solid ground to support a
model structure
3. Not available until an actual
system has been built
Introduction: System Identification in a Nutshell
[Figure: FIR block diagram — the input u(t) passes through delay taps with coefficients b1, b2,
b3, whose outputs are summed to give the output y(t).]
FIR: Finite Impulse Response Model
y(t) = b1 u(t − 1) + b2 u(t − 2) + ··· + b_m u(t − m)
Define
θ := [ b1, b2, ..., b_m ]ᵀ ∈ R^m    (unknown)
ϕ(t) := [ u(t − 1), u(t − 2), ..., u(t − m) ]ᵀ ∈ R^m    (known)
Vector θ collectively represents the model parameters to be identified based on the observed data
y(t) and ϕ(t) for a time interval of 1 ≤ t ≤ N.
Observed data: y(1), ..., y(N). Estimate θ.
Estimation
ŷ(t) = ϕᵀ(t)θ
This predicted output may be different from the actual y(t). Find θ that minimizes V_N(θ):
V_N(θ) = (1/N) Σ_{t=1}^{N} ( y(t) − ŷ(t) )²
θ̂ = arg min_θ V_N(θ)
Setting dV_N(θ)/dθ = 0, with V_N(θ) = (1/N) Σ_{t=1}^{N} ( y(t) − ϕᵀ(t)θ )²:
(2/N) Σ_{t=1}^{N} ( y(t) − ϕᵀ(t)θ ) ( −ϕ(t) ) = 0
Σ_{t=1}^{N} y(t)ϕ(t) = Σ_{t=1}^{N} ( ϕᵀ(t)θ ) ϕ(t) = ( Σ_{t=1}^{N} ϕ(t)ϕᵀ(t) ) θ = R_N θ
∴ θ̂_N = R_N^{−1} Σ_{t=1}^{N} y(t)ϕ(t)
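A minimal sketch of this estimator (our own code; the FIR taps are made up), forming R_N and θ̂_N from noiseless data:

```python
import numpy as np

rng = np.random.default_rng(3)
b_true = np.array([0.5, -0.3, 0.2])         # hypothetical FIR taps b1, b2, b3
m, N = 3, 500
u = rng.normal(size=N + m)                  # input sequence

# phi(t) = [u(t-1), ..., u(t-m)] and noiseless y(t) = phi(t)^T theta
Phi = np.array([[u[t - k] for k in range(1, m + 1)] for t in range(m, N + m)])
y = Phi @ b_true

R_N = Phi.T @ Phi                           # R_N = sum phi(t) phi(t)^T
theta_hat = np.linalg.solve(R_N, Phi.T @ y) # theta_N = R_N^{-1} sum y(t) phi(t)
assert np.allclose(theta_hat, b_true)
print("recovered taps:", np.round(theta_hat, 6))
```
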
Question 1  What will happen if we repeat the experiment and obtain θ̂_N again?
Consider the expectation of θ̂_N when the experiment is repeated many times: would the average
of θ̂_N be the same as the true parameter θ₀?
Let’s assume that the actual output data are generated from
y(t) = ϕᵀ(t)θ₀ + e(t)
where θ₀ is considered to be the true value. Assume that the noise sequence {e(t)} has a zero
mean value, i.e. E[e(t)] = 0, and has no correlation with the input sequence {u(t)}.
θ̂_N = R_N^{−1} Σ_{t=1}^{N} y(t)ϕ(t) = R_N^{−1} Σ_{t=1}^{N} [ ( ϕᵀ(t)θ₀ + e(t) ) ϕ(t) ]
    = R_N^{−1} ( Σ_{t=1}^{N} ϕ(t)ϕᵀ(t) ) θ₀ + R_N^{−1} Σ_{t=1}^{N} ϕ(t)e(t)
∴ θ̂_N − θ₀ = R_N^{−1} Σ_{t=1}^{N} ϕ(t)e(t)
Taking the expectation,
E[ θ̂_N − θ₀ ] = E[ R_N^{−1} Σ_{t=1}^{N} ϕ(t)e(t) ] = R_N^{−1} Σ_{t=1}^{N} ϕ(t) · E[e(t)] = 0
Question 2  Since the true parameter θ₀ is unknown, how do we know how close θ̂_N will be to θ₀?
How many data points N do we need to reduce the error θ̂_N − θ₀ to a certain level?
Consider the variance (the covariance matrix) of the parameter estimation error:
P_N = E[ ( θ̂_N − θ₀ )( θ̂_N − θ₀ )ᵀ ]
    = E[ R_N^{−1} Σ_{t=1}^{N} ϕ(t)e(t) · Σ_{s=1}^{N} e(s)ϕᵀ(s) R_N^{−1} ]
    = R_N^{−1} Σ_{t=1}^{N} Σ_{s=1}^{N} ϕ(t) E[ e(t)e(s) ] ϕᵀ(s) R_N^{−1}
Assume that {e(t)} is stochastically independent:
E[ e(t)e(s) ] = 0 for t ≠ s,   E[ e²(t) ] = λ for t = s
Then P_N = R_N^{−1} ( Σ_{t=1}^{N} ϕ(t) λ ϕᵀ(t) ) R_N^{−1} = λ R_N^{−1}
As N increases, R_N tends to blow up, but R_N/N converges under mild assumptions:
lim_{N→∞} (1/N) Σ_{t=1}^{N} ϕ(t)ϕᵀ(t) = lim_{N→∞} (1/N) R_N = R̄
For large N, R_N ≅ N R̄, so R_N^{−1} ≅ (1/N) R̄^{−1} and
P_N = (λ/N) R̄^{−1} for large N.
[Figure: θ̂_N approaching θ₀ as N grows.]
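A Monte Carlo sketch (our own, with arbitrary θ₀ and λ) of the result P_N ≈ (λ/N) R̄^{−1}: for white Gaussian regressors R̄ = I, so the total error variance should be about λm/N and shrink roughly 4× when N grows from 100 to 400.

```python
import numpy as np

rng = np.random.default_rng(4)
theta0 = np.array([1.0, -0.5])   # hypothetical true parameters
lam = 0.25                       # noise variance lambda
trials = 1000

def mse(N):
    # Monte Carlo mean of ||theta_hat_N - theta0||^2 over repeated experiments
    err2 = 0.0
    for _ in range(trials):
        Phi = rng.normal(size=(N, 2))       # E[phi phi^T] = I, so Rbar = I
        y = Phi @ theta0 + np.sqrt(lam) * rng.normal(size=N)
        th = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
        err2 += np.sum((th - theta0) ** 2)
    return err2 / trials

m100, m400 = mse(100), mse(400)
assert 0.6 < m100 / (lam * 2 / 100) < 1.4   # close to lambda * m / N
assert 2.5 < m100 / m400 < 6.0              # roughly 1/N decay
print("MSE at N=100:", m100, " at N=400:", m400)
```
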
I. The covariance P_N decays at the rate 1/N. The parameters approach the limiting value at
the rate of 1/√N.
II. The covariance is proportional to the noise variance λ and inversely proportional to the
magnitude of R̄:
R̄ = [ r_11 ··· r_1m ; ··· ; r_m1 ··· r_mm ],   r_ij = (1/N) Σ_{t=1}^{N} u(t − i) u(t − j)
III. The convergence of θ̂_N to θ₀ may be accelerated if we design inputs such that R̄ is large.
IV. The covariance does not depend on the average of the input signal, only on its second
moments.
A) How to best estimate the parameters
What type of input is maximally informative?
•
Informative data sets
• Persistent excitation
• Experiment design
• Pseudo Random Binary signals, Chirp sine waves, etc.
How to best tune the model / best estimate parameters
How to best use each data point
• Covariance analysis
• Recursive Least Squares
• Kalman filters
• Unbiased estimate
• Maximum Likelihood
B) How to best determine a model structure
How do we represent system behavior? How do we parameterize the model?
i. Linear systems
• FIR, ARX, ARMA, BJ,…..
• Data compression: Laguerre series expansion
ii. Nonlinear systems
• Neural nets
• Radial basis functions
iii. Time-Frequency representation
• Wavelets
Model order: Trade-off between accuracy/performance and reliability/robustness
• Akaike’s Information Criterion
• MDL
Introduction to C++
Massachusetts Institute of Technology
January 12, 2011
6.096
Lecture 5 Notes: Pointers
1 Background
1.1 Variables and Memory
When you declare a variable, the computer associates the variable name with a particular
location in memory and stores a value there.
When you refer to the variable by name in your code, the computer must take two steps:
1. Look up the address that the variable name corresponds to
2. Go to that location in memory and retrieve or set the value it contains
C++ allows us to perform either one of these steps independently on a variable with the &
and * operators:
1. &x evaluates to the address of x in memory.
2. *( &x ) takes the address of x and dereferences it – it retrieves the value at that
location in memory. *( &x ) thus evaluates to the same thing as x.
1.2 Motivating Pointers
Memory addresses, or pointers, allow us to manipulate data much more flexibly; manipulating
the memory addresses of data can be more efficient than manipulating the data itself.
Just a taste of what we’ll be able to do with pointers:
• More flexible pass-by-reference
• Manipulate complex data structures efficiently, even if their data is scattered in different
memory locations
• Use polymorphism – calling functions on data without knowing exactly what kind of
data it is (more on this in Lectures 7-8)
2 Pointers and their Behavior
2.1 The Nature of Pointers
Pointers are just variables storing integers – but those integers happen to be memory ad
dresses, usually addresses of other variables. A pointer that stores the address of some
variable x is said to point to x. We can access the value of x by dereferencing the pointer.
As with arrays, it is often helpful to visualize pointers by using a row of adjacent cells to
represent memory locations, as below. Each cell represents 1 block of memory. The dot-
arrow notation indicates that ptr “points to” x – that is, the value stored in ptr is 12314,
x’s memory address.
ptr
x
... 12309 12310 12311 12312 12313 12314
...
2.2 Pointer Syntax/Usage
2.2.1 Declaring Pointers
To declare a pointer variable named ptr that points to an integer variable named x:
int * ptr = & x ;
int *ptr declares the pointer to an integer value, which we are initializing to the address
of x.
We can have pointers to values of any type. The general scheme for declaring pointers is:
data_type * pointer_name ; // Add "= initial_value " if applicable
pointer name is then a variable of type data type * – a “pointer to a data type value.”
2.2.2 Using Pointer Values
Once a pointer is declared, we can dereference it with the * operator to access its value:
cout << * ptr ; // Prints the value pointed to by ptr ,
// which in the above example would be x ’s value
We can use dereferenced pointers as l-values:
* ptr = 5; // Sets the value of x
Without the * operator, the identifier ptr refers to the pointer itself, not the value it points
to:
cout << ptr ; // Outputs the memory address of x in base 16
Just like any other data type, we can pass pointers as arguments to functions. The same
way we’d say void func(int x) {...}, we can say void func(int *x){...}. Here is an
example of using pointers to square a number in a similar fashion to pass-by-reference:
1 void squareByPtr ( int * numPtr ) {
2     * numPtr = * numPtr * * numPtr ;
3 }
4
5 int main () {
6     int x = 5;
7     squareByPtr (& x ) ;
8     cout << x ; // Prints 25
9 }
Note the varied uses of the * operator on line 2.
2.2.3 const Pointers
There are two places the const keyword can be placed within a pointer variable declaration.
This is because there are two different | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/0240aeefb6d5fb9c0a20587ed98fa7ca_MIT6_096IAP11_lec05.pdf |
keyword can be placed within a pointer variable declaration.
This is because there are two different variables whose values you might want to forbid
changing: the pointer itself and the value it points to.
const int * ptr ;
declares a changeable pointer to a constant integer. The integer value cannot be changed
through this pointer, but the pointer may be changed to point to a different constant integer.
int * const ptr ;
declares a constant pointer to changeable integer data. The integer value can be changed
through this pointer, but the pointer may not be changed to point to a different integer.
const int * const ptr ;
forbids changing either the address ptr contains or the value it points to.
2.3 Null, Uninitialized, and Deallocated Pointers
Some pointers do not point to valid data; dereferencing such a pointer is a runtime error.
Any pointer set to 0 is called a null pointer, and since there is no memory location 0, it is an
invalid pointer. One should generally check whether a pointer is null before dereferencing it.
Pointers are often set to 0 to signal that they are not currently valid.
Dereferencing pointers to data that has been erased from memory also usually causes runtime
errors. Example:
int *myFunc() {
    int phantom = 4;
    return &phantom;
}
phantom is deallocated when myFunc exits, so the pointer the function returns is invalid.
As with any other variable, the value of a pointer is undefined until it is initialized, so it
may be invalid.
3 References
When we write void f(int &x) {...} and call f(y), the reference variable x becomes
another name – an alias – for the value of y in memory. We can declare a reference variable
locally, as well:
int y;
int &x = y; // Makes x a reference to, or alias of, y
After these declarations, changing x will change y and vice versa, because they are two names
for the same thing.
References are just pointers that are dereferenced every time they are used. Just like
pointers, you can pass them around, return them, set other references to them, etc. The only
differences between using pointers and using references are:
• References are sort of pre-dereferenced – you do not dereference them explicitly.
• You cannot change the location to which a reference points, whereas you can change
the location to which a pointer points. Because of this, references must always be
initialized when they are declared.
• When writing the value that you want to make a reference to, you do not put an &
before it to take its address, whereas you do need to do this for pointers.
3.1 The Many Faces of * and &
The usage of the * and & operators with pointers/references can be confusing. The * operator
is used in two different ways:
1. When declaring a pointer, * is placed before the variable name to indicate that the
variable being declared is a pointer – say, a pointer to an int or char, not an int or
char value.
2. When using a pointer that has been set to point to some value, * is placed before the
pointer name to dereference it – to access or set the value it points to.
A similar distinction exists for &, which can be used either
1. to indicate a reference data type (as in int &x;), or
2. to take the address of a variable (as in int *ptr = &x;).
4 Pointers and Arrays
The name of an array is actually a pointer to the first element in the array. Writing
myArray[3] tells the compiler to return the element that is 3 away from the starting
element of myArray.
This explains why arrays are always passed by reference: passing an array is really passing
a pointer.
This also explains why array indices start at 0: the first element of an array is the element
that is 0 away from the start of the array.
4.1 Pointer Arithmetic
Pointer arithmetic is a way of using subtraction and addition of pointers to move around
between locations in memory, typically | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/0240aeefb6d5fb9c0a20587ed98fa7ca_MIT6_096IAP11_lec05.pdf |
of using subtraction and addition of pointers to move around
between locations in memory, typically between array elements. Adding an integer n to a
pointer produces a new pointer pointing to n positions further down in memory.
4.1.1 Pointer Step Size
Take the following code snippet:
1 long arr[] = {6, 0, 9, 6};
2 long *ptr = arr;
3 ptr++;
4 long *ptr2 = arr + 3;
When we add 1 to ptr in line 3, we don’t just want to move to the next byte in memory,
since each array element takes up multiple bytes; we want to move to the next element in
the array. The C++ compiler automatically takes care of this, using the appropriate step
size for adding to and subtracting from pointers. Thus, line 3 moves ptr to point to the
second element of the array.
Similarly, we can add/subtract two pointers: ptr2 - ptr gives the number of array elements
between ptr2 and ptr (2). All addition and subtraction operations on pointers use the
appropriate step size.
4.1.2 Array Access Notations
Because of the interchangeability of pointers and array names, array-subscript notation (the
form myArray[3]) can be used with pointers as well as arrays. When used with pointers, it
is referred to as pointer-subscript notation.
An alternative is pointer-offset notation, in which you explicitly add your offset to the pointer
and dereference the resulting address. For instance, an alternate and functionally identical
way to express myArray[3] is *(myArray + 3).
4.2 char * Strings
You should now be able to see why the type of a string value is char *: a string is actually
an array of characters. When you set a char * to a string, you are really setting a pointer
to point to the first character in the array that holds the string.
You cannot modify string literals; to do so is either a syntax error or a runtime error,
depending on how you try to do it. (String literals are loaded into read-only program memory
at program startup.) You can, however, modify the contents of an array of characters.
Consider the following example:
char courseName1[] = {'6', '.', '0', '9', '6', '\0'};
char *courseName2 = "6.096";
Attempting to modify one of the elements of courseName1 is permitted, but attempting to
modify one of the characters in courseName2 will generate a runtime error, causing the
program to crash.
MIT OpenCourseWare
http://ocw.mit.edu
6.096 Introduction to C++
January (IAP) 2011
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/0240aeefb6d5fb9c0a20587ed98fa7ca_MIT6_096IAP11_lec05.pdf |
2.092/2.093 — Finite Element Analysis of Solids & Fluids I
Fall ‘09
Lecture 5 - The Finite Element Formulation
Prof. K. J. Bathe
MIT OpenCourseWare
In this system, (X, Y, Z) is the global coordinate system, and (x, y, z) is the local coordinate system for the
element i.
We want to satisfy the following equations:
\[
\tau_{ij,j} + f_i^B = 0 \ \text{in } V \qquad \rightarrow \ \text{Equilibrium Conditions}
\]
\[
\tau_{ij} n_j = f_i^{S_f} \ \text{on } S_f \qquad \rightarrow \ \text{Equilibrium Conditions} \tag{A}
\]
\[
u_i \big|_{S_u} = u_i^{S_u} \qquad \rightarrow \ \text{Compatibility Conditions}
\]
\[
\tau_{ij} = f(\varepsilon_{kl}) \qquad \rightarrow \ \text{Stress-strain Relations}
\]
Then we have the exact solution.
Principle of Virtual Displacements
\[
\int_V \bar\varepsilon^{\,T} C \varepsilon \, dV = \int_V \bar u^{\,T} f^B \, dV + \int_{S_f} \bar u^{S_f T} f^{S_f} \, dS_f \tag{B}
\]
Here, real stresses (Cε) are in equilibrium with the external forces (f B , f Sf ). Note that Eq. (B) is equivalent
to Eq. (A). Recall that we defined
\[
\varepsilon^T = \begin{bmatrix} \varepsilon_{xx} & \varepsilon_{yy} & \varepsilon_{zz} & \gamma_{xy} & \gamma_{yz} & \gamma_{zx} \end{bmatrix}, \qquad
\bar\varepsilon^{\,T} = \begin{bmatrix} \bar\varepsilon_{xx} & \bar\varepsilon_{yy} & \bar\varepsilon_{zz} & \bar\gamma_{xy} & \bar\gamma_{yz} & \bar\gamma_{zx} \end{bmatrix}
\]
\[
\varepsilon_{xx} = \frac{\partial u}{\partial x}, \ \ldots
\]
Basic assumptions:
\[
u^{(m)} = \begin{bmatrix} u(x,y,z) \\ v(x,y,z) \\ w(x,y,z) \end{bmatrix}^{(m)} = \underset{3\times n}{H^{(m)}}\ \underset{n\times 1}{\hat u} \tag{1}
\]
\[
\hat u = \begin{bmatrix} u_1 \\ v_1 \\ w_1 \\ \vdots \\ u_N \\ v_N \\ w_N \end{bmatrix}
\]
N is the number of nodes (3N = n) and H is the displacement interpolation matrix. For the moment, let’s
assume Su = 0. We use
\[
\hat u^T = \begin{bmatrix} u_1 & u_2 & u_3 & \ldots & u_n \end{bmatrix}
\]
Then, we obtain
\[
\underset{6\times 1}{\varepsilon^{(m)}} = \underset{6\times n}{B^{(m)}}\ \underset{n\times 1}{\hat u} \tag{2}
\]
We also assume
\[
\bar u^{(m)} = H^{(m)} \hat{\bar u} \tag{3}
\]
\[
\underset{6\times 1}{\bar\varepsilon^{\,(m)}} = \underset{6\times n}{B^{(m)}}\ \underset{n\times 1}{\hat{\bar u}} \tag{4}
\]
where B is the strain-displacement matrix. Substitute equations (1) through (4) into (B):
\[
\sum_m \int_{V^{(m)}} \bar\varepsilon^{(m)T} C^{(m)} \varepsilon^{(m)} \, dV^{(m)} = \sum_m \int_{V^{(m)}} \bar u^{(m)T} f^{B(m)} \, dV^{(m)} + \sum_m \sum_i \int_{S_f^{i(m)}} \bar u^{S_f^{i(m)}T} f^{S_f^{i(m)}} \, dS_f^{i(m)} \tag{B*}
\]
where $i$ sums over the element surfaces composing $S_f^{(m)}$. The equation now becomes
\[
\hat{\bar u}^T \left[\, \sum_m \int_{V^{(m)}} B^{(m)T} C^{(m)} B^{(m)} \, dV^{(m)} \right] \hat u =
\hat{\bar u}^T \left[\, \sum_m \int_{V^{(m)}} H^{(m)T} f^{B(m)} \, dV^{(m)} + \sum_m \sum_i \int_{S_f^{i(m)}} H^{S_f^{i(m)}T} f^{S_f^{i(m)}} \, dS_f^{i(m)} \right]
\]
$\hat u$ is the unknown to be found. When evaluated on $S_f^{i(m)}$,
\[
\bar u^{S_f^{i(m)}} = H^{S_f^{i(m)}}\, \hat{\bar u}, \qquad \text{with } H^{S_f^{i(m)}} = H^{(m)} \Big|_{S_f^{i(m)}}
\]
With the transformed equation above, we can insert the rows of the identity matrix for $\hat{\bar u}^T$:

Let $\hat{\bar u}^T = \begin{bmatrix} 1 & 0 & 0 & \ldots & 0 \end{bmatrix}$ → Gives the first equation to solve for
Then $\hat{\bar u}^T = \begin{bmatrix} 0 & 1 & 0 & \ldots & 0 \end{bmatrix}$ → Gives the second equation
Then $\hat{\bar u}^T = \begin{bmatrix} 0 & 0 & 1 & \ldots & 0 \end{bmatrix}$ → Gives the third equation
. . . and so on.
We finally obtain $K\hat u = R$. Now, let's drop the hat!
\[
KU = R
\]
\[
K = \sum_m K^{(m)}; \qquad K^{(m)} = \int_{V^{(m)}} B^{(m)T} C^{(m)} B^{(m)} \, dV^{(m)}
\]
\[
R = R_B + R_S
\]
\[
R_B = \sum_m R_B^{(m)}; \qquad R_B^{(m)} = \int_{V^{(m)}} H^{(m)T} f^{B(m)} \, dV^{(m)}
\]
\[
R_S = \sum_m R_S^{(m)}; \qquad R_S^{(m)} = \sum_i \int_{S_f^{i(m)}} H^{S_f^{i(m)}T} f^{S_f^{i(m)}} \, dS_f^{i(m)}
\]
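As a concrete instance of the element stiffness integral (a standard 1-D result under assumptions not spelled out in these notes: a two-node bar element of length $L$, cross-section $A$, Young's modulus $E$, with linear interpolation $H = [\,1 - x/L \ \ x/L\,]$, so that $B = dH/dx = [\,-1/L \ \ 1/L\,]$):
\[
K^{(m)} = \int_0^L B^T (EA)\, B \, dx = \frac{EA}{L} \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}
\]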
Example 4.5
Reading assignment: Section 4.2
For this system, we can define $U^T = [\, u_1 \ u_2 \ u_3 \,]$. We want to find:
\[
u^{(1)}(x) = H^{(1)} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}; \qquad
u^{(2)}(x) = H^{(2)} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}
\]
MIT OpenCourseWare
http://ocw.mit.edu
2.092 / 2.093 Finite Element Analysis of Solids and Fluids I
Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/2-092-finite-element-analysis-of-solids-and-fluids-i-fall-2009/0252bdf2e6c34bca6819c5be82f3bcb0_MIT2_092F09_lec05.pdf |
8.821/8.871 Holographic duality
MIT OpenCourseWare Lecture Notes
Hong Liu, Fall 2014
Lecture 1
1: HINTS FOR HOLOGRAPHY
In this chapter, we will get a flavor of the holographic duality. We first study gravity systems and derive black hole
thermodynamics, where the holographic principle emerges. Then we investigate gauge theory in the large N ('t Hooft)
limit. At last, we compare such a theory with string theory and give hints of the holographic duality.
1.1: PRELUDE: GRAVITY VS. OTHER INTERACTIONS
Let us first do a simple exercise: which of the following interactions does not belong to the group, and why?
a) electromagnetism
b) weak interaction
c) strong interactions
d) gravity
The answer would be d). a)-c) are all interactions that can be described by gauge theories in fixed spacetime
(Minkowski spacetime):
Quantum electrodynamics (QED): U (1) gauge field + Dirac fields,
Electroweak interaction: SU (2) × U (1),
Strong interaction: SU (3).
And the basic theoretical structure is well understood by Path-integral formalism plus Wilsonian Renormalization
Group. Then any calculation can be reduced to this algorithm, although it does not mean we can necessarily
perform it. On the other hand, gravity is quite different. From the theory of general relativity (GR):
Classical gravity = Spacetime
(1)
However, how do we understand quantum gravity? Spacetime here should become dynamical. There are many
puzzling questions. Is spacetime fundamental or emergent? Is it continuous or discrete? What is the quantum
nature of black holes? How did the universe begin? One intriguing feature about gravity is that it is the weakest
interaction, which may be a fundamental aspect.
In 1997, Juan Maldacena discovered the famous duality:
Quantum gravity in Anti de Sitter spacetime = Field theories (many-body system) in a fixed spacetime
(2)
The two sides should be considered as different descriptions of the same quantum system. This duality provides
a "unification", which has far-reaching implications for both sides of the equation. Maldacena's original paper has
been cited more than 10,000 times in the SLAC database. But the subject is still in its infancy, and many
elementary issues are not yet understood. In a sense it is still like magic. Eq. (2), when fully understood, will be
comparable to other milestones of physics, e.g. Newton's universal gravitation, Maxwell's electromagnetism,
Boltzmann's statistical mechanics, Einstein's relativity, etc.
The goal of the course:
• Motivate and “derive” the duality.
• Work out the dictionary and develop tools for the duality.
• Understand physical implications for both sides of the duality.
• Learn important features.
• Examine open questions.
Emergence of gravity
Duality (2) implies that quantum gravity plus spacetime can emerge from a non-gravitational system. The idea
itself is not new: In 1967, A. Sakharov observed that certain condensed matter systems have mathematical
descriptions similar to those in GR. This led to a natural question: could GR arise as an effective description of
some condensed matter systems? In 1950’s, people already speculated that GR is a macroscopic description just
like hydrodynamics.
From the field theory perspective, it is natural to ask whether massless spin-2 particles (gravitons) can arise as bound
states in a theory of massless spin-1 (photons, gluons) and spin-1/2 particles (protons, electrons). If the answer is
yes, we can conclude that gravity can be emergent. For example, in Quantum Chromodynamics (QCD), there are
indeed massive spin-2 excitations. Could one tweak such a theory so that massless spin-2 particles emerge? Such
hopes were however dashed by a powerful theorem of Weinberg and Witten [1].
Theorem 1: A theory that allows the construction of a Lorentz-covariant conserved 4-vector current $J^\mu$ cannot
contain massless particles of spin $> 1/2$ with non-vanishing values of the conserved charge $\int J^0 \, d^3x$.
Theorem 2: A theory that allows a conserved Lorentz-covariant stress tensor $T^{\mu\nu}$ cannot contain massless
particles of spin $> 1$.
Remarks:
1. The theorems apply to both ”elementary” and ”composite” particles.
2. The theorems are consistent with the fact that massless photons in QED = Maxwell + Dirac fields, as photons
do not carry any charge.
3. The theorems are consistent with Yang-Mills (YM) theory. Consider SU(2) YM with gauge fields $A^a_\mu$, $a = 1, 2, 3$.
For example, $A^\pm_\mu = \frac{1}{\sqrt{2}}(A^1_\mu \pm i A^2_\mu)$ are massless spin-1 fields charged under the U(1) subgroup generated by
$\frac{\sigma_3}{2}$. But there does not exist a conserved, Lorentz-covariant, gauge-invariant current for this U(1) symmetry.
(see pset 1)
4. It does not forbid gravitons from GR. In GR, there is no conserved, Lorentz-covariant stress tensor.
5. Theorem 2 indicates that no renormalizable quantum field theory (QFT) in Minkowski spacetime can
have an emergent graviton. That is why no matter how we tweak QCD, this can never happen.
6. A hidden assumption of the theorems is that those particles live in the same spacetime as the original theory.
This loophole was utilized by holographic duality.
Proof.
Suppose we have such a theory that allows a Lorentz-covariant conserved current and stress tensor, and there exist
massless particles of spin $j$. One-particle states are denoted as
\[
|k, \sigma\rangle, \qquad k^\mu = (k^0, \vec k), \qquad \sigma = \pm j \ (\text{helicity}) \tag{3}
\]
We have
\[
\hat R(\theta, \hat k)\, |k, \sigma\rangle = e^{i\sigma\theta} |k, \sigma\rangle \tag{4}
\]
where $\hat R(\theta, \hat k)$ is the rotation operator by an angle $\theta$ around $\hat k = \vec k / |\vec k|$. More about representations of the Poincare
group can be found in Ref. [2]. The conserved, Lorentz-covariant current is $J^\mu$, with conserved charge
$\hat Q = \int J^0 d^3x$; the conserved, Lorentz-covariant stress tensor is $T^{\mu\nu}$, with conserved momentum
$\hat P^\mu = \int T^{0\mu} d^3x$. Then
\[
\hat P^\mu |k, \sigma\rangle = k^\mu |k, \sigma\rangle \tag{5}
\]
If $|k, \sigma\rangle$ is charged under the symmetry generated by $J^\mu$ with charge $q$:
\[
\hat Q |k, \sigma\rangle = q\, |k, \sigma\rangle \tag{6}
\]
We want to show that:
1. if $q \neq 0$, then $j \leq 1/2$;
2. $j \leq 1$.
First we claim that Lorentz invariance implies:
\[
\langle k, \sigma | J^\mu | k', \sigma \rangle \ \xrightarrow{\ k \to k'\ } \ \frac{1}{(2\pi)^3}\, \frac{q k^\mu}{k^0} \tag{7}
\]
\[
\langle k, \sigma | T^{\mu\nu} | k', \sigma \rangle \ \xrightarrow{\ k \to k'\ } \ \frac{1}{(2\pi)^3}\, \frac{k^\mu k^\nu}{k^0} \tag{8}
\]
where $\langle k, \sigma | k', \sigma \rangle = \delta_{\sigma,\sigma'}\, \delta^{(3)}(\vec k - \vec k')$. You need to prove the claim in pset 1; one self-consistency check:
looking at the 0-component of Eq. (7), we have $\langle k, \sigma | J^0 | k', \sigma \rangle \xrightarrow{k \to k'} \frac{q}{(2\pi)^3}$.
For massless particles, $k^2 = k'^2 = 0$. This implies that $k^\mu k'_\mu < 0$, i.e. $k + k'$ is timelike. We can choose a frame
such that $\vec k + \vec k' = 0$ and $k^\mu = (E, 0, 0, E)$, $k'^\mu = (E, 0, 0, -E)$. In this frame, a rotation by $\theta$ around the z-axis has
the effect:
\[
\hat R(\theta)\, |k, j\rangle = e^{ij\theta} |k, j\rangle, \qquad \hat R(\theta)\, |k', j\rangle = e^{-ij\theta} |k', j\rangle \tag{9}
\]
Now consider
\[
\langle k', j |\, \hat R^{-1}(\theta)\, J^\mu\, \hat R(\theta)\, | k, j \rangle \tag{10}
\]
Then we have
\[
e^{2ij\theta} \langle k', j | J^\mu | k, j \rangle = \Lambda^\mu{}_\nu(\theta)\, \langle k', j | J^\nu | k, j \rangle \tag{11}
\]
where $\Lambda^\mu{}_\nu(\theta)$ is the rotational transformation acting on a vector by an angle $\theta$ around the z-axis, i.e.
\[
\Lambda^\mu{}_\nu(\theta) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \tag{12}
\]
Similarly:
\[
e^{2ij\theta} \langle k', j | T^{\mu\nu} | k, j \rangle = \Lambda^\mu{}_\rho(\theta)\, \Lambda^\nu{}_\lambda(\theta)\, \langle k', j | T^{\rho\lambda} | k, j \rangle \tag{13}
\]
Note that $\Lambda^\mu{}_\nu$ only has eigenvalues $e^{\pm i\theta}$ and $1$, so $\langle k', j | \hat R^{-1}(\theta)\, J^\mu\, \hat R(\theta) | k, j \rangle$ can only be nonzero if $j \leq 1/2$;
otherwise Eq. (7) is contradicted. Likewise, $\langle k', j | \hat R^{-1}(\theta)\, T^{\mu\nu}\, \hat R(\theta) | k, j \rangle$ can only be nonzero if $j \leq 1$;
otherwise Eq. (8) is contradicted. Thus we have proved the theorem.
The Weinberg-Witten theorem forbids the existence of massless spin-2 particles, which are a hallmark of gravity, in the
same spacetime in which the QFT lives. But there is a loophole: emergent gravity can live in a different spacetime, as in a
holographic duality. We are not yet ready to go there without some preparations:
1. Black hole thermodynamics ⇒ holographic principle.
2. Large N gauge theories ⇒ gauge/string duality.
3. A bit of string theory which would be useful for building intuitions and perspectives.
References
[1] S. Weinberg and E. Witten, Physics Letters B 96 (1-2): 59-62 (1980).
[2] S. Weinberg, The Quantum Theory of Fields, Cambridge University Press (2005).
MIT OpenCourseWare
http://ocw.mit.edu
8.821 / 8.871 String Theory and Holographic Duality
Fall 2014
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/8-821-string-theory-and-holographic-duality-fall-2014/0252c3933c4e097296c1f43cfbc98e37_MIT8_821S15_Lec1.pdf |
Consequence of Electrons as Waves on
Free Electron Model
• Boundary conditions will produce quantized energies for
all free electrons in the material
• Two electrons with same spin can not occupy same
electron energy (Pauli exclusion principle)
Imagine 1-D crystal for now
Traveling wave picture: $e^{i(kx - \omega t)}$, $e^{-i(kx + \omega t)}$
Standing wave picture: $e^{i(kx - \omega t)} + e^{-i(kx + \omega t)} = e^{-i\omega t}(e^{ikx} + e^{-ikx}) = e^{-i\omega t}(2\cos kx)$
©1999 E.A. Fitzgerald
Consequence of Electrons as Waves on
Free Electron Model
[Figure: traveling-wave and standing-wave pictures on a 1-D crystal of length L.]
\[
\Psi(x) = \Psi(x + L) \;\Rightarrow\; e^{ikx} = e^{ik(x+L)} \;\Rightarrow\; 1 = e^{ikL} \;\Rightarrow\; k = \frac{2\pi n}{L}
\]
Just having a boundary condition means that k and E are quasi-continuous,
i.e. for large L, they appear continuous but are discrete
Representation of E,k for 1-D Material
[Figure: E vs. k for a 1-D material; allowed states (spacing $\Delta k = 2\pi/L$) are quasi-continuous, and each holds two electrons ($m = +1/2, -1/2$); levels $E_{n-1}, E_n, E_{n+1}$ are filled up to $E_F$ at $\pm k_F$.]
\[
E = \frac{\hbar^2 k^2}{2m} = \frac{p^2}{2m}, \qquad \frac{dE}{dk} = \frac{\hbar^2 k}{m}
\]
All e$^-$ in the box are accounted for: total number of electrons $N = 2 \cdot 2k_F \cdot \dfrac{L}{2\pi}$.
Representation of E,k for 1-D Material
\[
E = \frac{\hbar^2 k^2}{2m}; \qquad k = \frac{\sqrt{2mE}}{\hbar}; \qquad N = \frac{2Lk_F}{\pi}
\]
\[
g(E) = \frac{dN}{dk}\,\frac{dk}{dE}\,\frac{1}{L} = \frac{2m}{\pi\hbar^2 k} = \frac{1}{\pi\hbar}\sqrt{\frac{2m}{E}}
\]
g(E) = density of states = number of electrons per energy per length
\[
n = \frac{N}{L} = \frac{2k_F}{\pi} = \frac{2\sqrt{2mE_F}}{\pi\hbar} \qquad \text{or} \qquad k_F = \frac{n\pi}{2}
\]
• n is the number of electrons per unit length, and is determined by the crystal structure and valence
• The electron density, n, determines the energy and velocity of the highest occupied electron state at T = 0
Representation of E,k for 2-D Material
\[
E = \frac{\hbar^2 (k_x^2 + k_y^2)}{2m}
\]
[Figure: paraboloid $E(k_x, k_y)$ over the $(k_x, k_y)$ plane.]
Representation of E,k for 3-D Material
[Figure: allowed states on a grid of spacing $2\pi/L$ in $(k_x, k_y, k_z)$ space, filled up to the Fermi surface (Fermi sphere) of radius $k_F$.]
\[
E = \frac{\hbar^2 (k_x^2 + k_y^2 + k_z^2)}{2m}, \qquad g(E) = \frac{1}{2\pi^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\sqrt{E}
\]
\[
k_F = (3\pi^2 n)^{1/3}, \qquad v_F = \frac{\hbar k_F}{m}, \qquad E_F = \frac{\hbar^2 k_F^2}{2m}, \qquad T_F = \frac{E_F}{k_B}
\]
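As a worked check of these formulas (using a standard handbook value for copper's conduction-electron density, $n \approx 8.5 \times 10^{22}\ \mathrm{cm^{-3}}$, which is not listed on these slides):
\[
k_F = (3\pi^2 n)^{1/3} \approx 1.36 \times 10^8\ \mathrm{cm^{-1}}, \qquad
E_F = \frac{\hbar^2 k_F^2}{2m} \approx 7.0\ \mathrm{eV}, \qquad
T_F = \frac{E_F}{k_B} \approx 8.2 \times 10^4\ \mathrm{K}
\]
which agrees with the Cu row of the table below.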
So how have material properties changed?
• The Fermi velocity is much higher than
kT even at T=0! Pauli Exclusion raises
the energy of the electrons since only 2
e- allowed in each level
• Only electrons near Fermi surface can
interact, i.e. absorb energy and
contribute to properties
$T_F \sim 10^4$ K ($T_{room} \sim 10^2$ K), $E_F \sim 100\,E_{class}$, $v_F^2 \sim 100\,v_{class}^2$
Fermi energies, Fermi temperatures, Fermi wave vectors, and Fermi velocities for representative metals*

Element   rs/a0   eF (eV)   TF (x 10^4 K)   kF (x 10^8 cm^-1)   vF (x 10^8 cm/sec)
Li        3.25    4.74      5.51            1.12                1.29
Na        3.93    3.24      3.77            0.92                1.07
K         4.86    2.12      2.46            0.75                0.86
Rb        5.20    1.85      2.15            0.70                0.81
Cs        5.62    1.59      1.84            0.65                0.75
Cu        2.67    7.00      8.16            1.36                1.57
Ag        3.02    5.49      6.38            1.20                1.39
Au        3.01    5.53      6.42            1.21                1.40
Be        1.87    14.3      16.6            1.94                2.25
Mg        2.66    7.08      8.23            1.36                1.58
Ca        3.27    4.69      5.44            1.11                1.28
Sr        3.57    3.93      4.57            1.02                1.18
Ba        3.71    3.64      4.23            0.98                1.13
Nb        3.07    5.32      6.18            1.18                1.37
Fe        2.12    11.1      13.0            1.71                1.98
Mn        2.14    10.9      12.7            1.70                1.96
Zn        2.30    9.47      11.0            1.58                1.83
Cd        2.59    7.47      8.68            1.40                1.62
Hg        2.65    7.13      8.29            1.37                1.58
Al        2.07    11.7      13.6            1.75                2.03
Ga        2.19    10.4      12.1            1.66                1.92
In        2.41    8.63      10.0            1.51                1.74
Tl        2.48    8.15      9.46            1.46                1.69
Sn        2.22    10.2      11.8            1.64                1.90
Pb        2.30    9.47      11.0            1.58                1.83
Bi        2.25    9.90      11.5            1.61                1.87
Sb        2.14    10.9      12.7            1.70                1.96

* The table entries are calculated from the values of rs/a0 given in Table 1.1 using m = 9.11 x 10^-28 grams.

Table by MIT OpenCourseWare.
Effect of Temperature (T>0): Coupled electronic-thermal
properties in conductors (i.e. cv)
• Electrons at the Fermi surface are able to increase energy: responsible for properties
• Fermi-Dirac distribution
• NOT Boltzmann distribution, in which any number of particles can occupy each energy state/level

[Figure: level filling up to $E_F$ at T = 0 vs. T > 0; ...N possible configurations.]

\[
f = \frac{1}{e^{(E - E_F)/k_b T} + 1}
\]

If $(E - E_F)/k_b T$ is large (i.e. far from $E_F$), then
\[
f \approx e^{-(E - E_F)/k_b T}
\]
Fermi-Dirac Distribution: the Fermi Surface
when T>0
[Figure: $f(E)$ at T = 0 (step function) and T > 0 (smeared by $\sim k_b T$ around $\mu \approx E_F$), compared with the Boltzmann distribution $f_{Boltz}$; all electrons well below $E_F$ are not perturbed by T, and a Boltzmann-like tail appears at the larger $E - E_F$ values.]

Heat capacity of metal (which is ~ heat capacity of free e$^-$ in a metal):
\[
c_v = \left(\frac{\partial U}{\partial T}\right)_v
\]
where U is the total energy of electrons in the system:
\[
U \sim \Delta E \cdot \Delta N \sim k_b T \cdot \left[\, g(E_F) \cdot k_b T \,\right] \sim g(E_F) \cdot (k_b T)^2
\]
\[
c_v = \left(\frac{\partial U}{\partial T}\right)_v = 2 \cdot g(E_F) \cdot k_b^2 T
\]
Right dependence, very close to exact derivation.
Heat Capacity (cv) of electrons in Metal
• Rough derivation shows cv~const. x T , thereby giving correct
dependence
• New heat capacity is about 100 times less than the classical
expectation
Exact derivation:
\[
c_v = \frac{\pi^2}{3} \cdot k_b^2 T \cdot g(E_F)
\]
\[
c_v^{class} = \frac{3}{2} n k_b, \qquad c_v^{quant} = \frac{\pi^2}{2}\left(\frac{k_b T}{E_F}\right) n k_b
\]
\[
\frac{c_v^{class}}{c_v^{quant}} = \frac{3}{\pi^2}\,\frac{E_F}{k_b T} \approx 100 \ \text{at RT}
\]
6.087 Lecture 8 – January 21, 2010
Review
Pointers
Void pointers
Function pointers
Hash table
Review: Pointers
• pointers: int x; int *p = &x;
• pointer to pointer: int x; int *p = &x; int **pp = &p;
• Array of pointers: char *names[] = {"abba", "u2"};
• Multidimensional arrays: int x[20][20];
Review: Stacks
• LIFO: last in first out data structure.
• items are inserted and removed from the same end.
• operations: push(),pop(),top()
• can be implemented using arrays, linked list
Review: Queues
• FIFO: first in first out
• items are inserted at the rear and removed from the front.
• operations: queue(),dequeue()
• can be implemented using arrays, linked list
Review: Expressions
• Infix: (A+B)*(C-D)
• prefix: *+AB-CD
• postfix: AB+CD-*
6.087 Lecture 8 – January 21, 2010
Review
Pointers
Void pointers
Function pointers
Hash table
Void pointers
• C does not allow us to declare and use void variables.
• void can be used only as return type or parameter of a function.
• C allows void pointers.
• Question: What are some scenarios where you want to pass void pointers?
• void pointers can be used to point to any data type:
  int x; void *p = &x; /* points to int */
  float f; void *p = &f; /* points to float */
• void pointers cannot be dereferenced. The pointers should always be cast before dereferencing.
  void *p; printf("%d", *p); /* invalid */
  void *p; int *px = (int *)p; printf("%d", *px); /* valid */
Function pointers
• In some programming languages, functions are first class
variables (can be passed to functions, returned from
functions etc.).
• In C, a function itself is not a variable. But it is possible to declare pointers to functions.
• Question: What are some scenarios where you want to
pass pointers to functions?
• Declaration examples:
  int (*fp)(int); /* notice the () */
  int (*fp)(void *, void *);
• Function pointers can be assigned, passed to and from functions, placed in arrays etc.
Callbacks
Definition: Callback is a piece of executable code passed to
functions. In C, callbacks are implemented by passing function
pointers.
Example:
void qsort(void *arr, int num, int size, int (*fp)(void *pa, void *pb))
• qsort() function from the standard library can sort an
array of any datatype.
• qsort() calls a function whenever a comparison needs to
be done.
• The function takes two arguments and returns (<0,0,>0)
depending on the relative order of the two items.
Callback (cont.)
int arr[] = {10, 9, 8, 1, 2, 3, 5};

/* callback */
int asc(void *pa, void *pb)
{
    return (*(int *)pa - *(int *)pb);
}

/* callback */
int desc(void *pa, void *pb)
{
    return (*(int *)pb - *(int *)pa);
}

/* sort in ascending order */
qsort(arr, sizeof(arr)/sizeof(int), sizeof(int), asc);
/* sort in descending order */
qsort(arr, sizeof(arr)/sizeof(int), sizeof(int), desc);
Callback (cont.)
Consider a linked list with nodes defined as follows:
struct node {
    int data;
    struct node *next;
};
Also consider the function 'apply' defined as follows:
void apply(struct node *phead,
           void (*fp)(void *, void *),
           void *arg)   /* only fp has to be named */
{
    struct node *p = phead;
    while (p != NULL)
    {
        fp(p, arg);    /* can also use (*fp)(p, arg) */
        p = p->next;
    }
}
Callback (cont.)
Iterating:
struct node *phead;
/* populate somewhere */

void print(void *p, void *arg)
{
    struct node *np = (struct node *)p;
    printf("%d ", np->data);
}

apply(phead, print, NULL);
Callback (cont.)
Counting nodes:
void dototal(void *p, void *arg)
{
    struct node *np = (struct node *)p;
    int *ptotal = (int *)arg;
    *ptotal += np->data;
}

int total = 0;
apply(phead, dototal, &total);
Array of function pointers
Example:Consider the case where different functions are called
based on a value.
enum TYPE {SQUARE, RECT, CIRCLE, POLYGON};
struct shape {
    float params[MAX];
    enum TYPE type;
};

void draw(struct shape *ps)
{
    switch (ps->type)
    {
        case SQUARE:
            draw_square(ps); break;
        case RECT:
            draw_rect(ps); break;
        ...
    }
}
Array of function pointers
The same can be done using an array of function pointers
instead.
void (*fp[4])(struct shape *ps) =
    {&draw_square, &draw_rect, &draw_circle, &draw_poly};

typedef void (*drawfn)(struct shape *ps);
drawfn fp[4] =
    {&draw_square, &draw_rect, &draw_circle, &draw_poly};

void draw(struct shape *ps)
{
    (*fp[ps->type])(ps);   /* call the correct function */
}
6.087 Lecture 8 – January 21, 2010
Review
Pointers
Void pointers
Function pointers
Hash table
Hash table
Hash tables (hashmaps) combine linked lists and arrays to
provide an efficient data structure for storing dynamic data.
Hash tables are commonly implemented as an array of linked
lists (hash tables with chaining).
Figure: Example of a hash table with chaining (source: wikipedia)
Hash table
• Each data item is associated with a key that determines its
location.
• Hash functions are used to generate an evenly distributed
hash value.
• A hash collision is said to occur when two items have the same hash value.
• Items with the same hash keys are chained
• Retrieving an item is O(1) operation.
Hash tables
Hash functions:
• A hash function maps its input into a finite range: the hash value (hash code).
• The hash value should ideally have a uniform distribution. Why? Because a uniform distribution spreads items evenly across the buckets, keeping the chains short.
• Other uses of hash functions: cryptography, caches (computers/internet), Bloom filters, etc.
• Hash function types:
• Division type
• Multiplication type
• Other ways to handle collisions: linear probing, double hashing.
Hash table: example
#define MAX_BUCKETS 1000
#define MULTIPLIER 31

struct wordrec
{
    char *word;
    unsigned long count;
    struct wordrec *next;
};

/* hash buckets */
struct wordrec *table[MAX_BUCKETS];
Hash table: example
unsigned long h a s h s t r i n g ( const char ∗ s t r )
{
unsigned long hash =0;
while ( ∗ s t r )
{
}
hash= hash∗MULTIPLIER+∗ s t r ;
s t r ++;
r e t u r n hash%MAX_BUCKETS;
}
Hash table: example
s t r u c t wordrec ∗
{
lookup ( const char ∗ s t r , i n t c r e a t e )
s t r u c t wordrec ∗ c u r r =NULL ;
unsigned long hash= h a s h s t r i n g ( s t r ) ;
s t r u c t wordrec ∗ wp= t a b l e [ hash ] ;
f o r ( c u r r =wp ; c u r r ! =NULL ; c u r r = c u r r −>n e x t )
/ ∗ search ∗ / ;
n o t f ou n d :
i f ( c r e a t e )
/ ∗ add t o
r e t u r n c u r r ;
}
f r o n t ∗ /
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIT OpenCourseWare
http://ocw.mit.edu
6.087 Practical Programming in C
January (IAP) 2010
7. Rational Cherednik algebras and Hecke algebras for varieties with
group actions
7.1. Twisted differential operators. Let us recall the theory of twisted differential oper
ators (see [BB], section 2).
Let X be a smooth affine algebraic variety over C. Given a closed 2-form ω on X, the
algebra Dω(X) of differential operators on X twisted by ω can be defined as the algebra
generated by OX and “Lie derivatives” Lv, v ∈ Vect(X), with defining relations
f Lv = Lf v, [Lv, f ] = Lvf, [Lv, Lw] = L[v,w] + ω(v, w).
This algebra depends only on the cohomology class [ω] of ω, and equals the algebra D(X)
of usual differential operators on X if [ω] = 0.
An important special case of twisted differential operators is the algebra of differential
operators on a line bundle. Namely, let L be a line bundle on X. Since X is affine, L admits
an algebraic connection � with curvature ω, which is a closed 2-form on X. Then it is easy
to show that the algebra D(X, L) of differential operators on L is isomorphic to Dω(X).
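This can be checked by a short local computation (supplied here, not from the text): trivialize L so that ∇ = d + α for a 1-form α, and set

```latex
L_v := \partial_v + \alpha(v), \qquad
[L_v, L_w]
  = \partial_{[v,w]} + \partial_v\,\alpha(w) - \partial_w\,\alpha(v)
  = L_{[v,w]} + d\alpha(v,w),
```

using $d\alpha(v,w) = \partial_v\,\alpha(w) - \partial_w\,\alpha(v) - \alpha([v,w])$. Thus with ω = dα (the curvature of ∇) the defining relation $[L_v, L_w] = L_{[v,w]} + \omega(v,w)$ of Dω(X) holds.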
If the variety X is smooth but not necessarily affine, then (sheaves of) algebras of twisted differential operators are classified by the space H^2(X, Ω^{≥1}_X), where Ω^{≥1}_X is the two-step complex of sheaves Ω^1_X → Ω^{2,cl}_X, given by the De Rham differential acting from 1-forms to closed 2-forms (sitting in degrees 1 and 2, respectively). If X is projective then this space is isomorphic to H^{2,0}(X, C) ⊕ H^{1,1}(X, C). We refer the reader to [BB], Section 2, for details.

Remark 7.1. One can show that Dω(X) is the universal deformation of D(X) (see [E1]).
7.2. Some algebraic geometry preliminaries. Let Z be a smooth hypersurface in a smooth affine variety X. Let i : Z → X be the corresponding closed embedding. Let N denote the normal bundle of Z in X (a line bundle). Let OX(Z) denote the module of regular functions on X \ Z which have a pole of at most first order at Z. Then we have a natural map of OX-modules φ : OX(Z) → i∗N. Indeed, we have a natural residue map η : OX(Z) ⊗OX Ω^1_X → i∗OZ (where Ω^1_X is the module of 1-forms), hence a map η′ : OX(Z) → i∗OZ ⊗OX TX = i∗(TX|Z) (where TX is the tangent bundle). The map φ is obtained by composing η′ with the natural projection TX|Z → N.
We have an exact sequence of OX-modules:

0 → OX → OX(Z) →^{φ} i∗N → 0.

Thus we have a natural surjective map of OX-modules ξZ : TX → OX(Z)/OX.
7.3. The Cherednik algebra of a variety with a finite group action. We will now
generalize the definition of Ht,c(G, h) to the global case. Let X be an affine algebraic variety
over C, and G be a finite group of automorphisms of X. Let E be a G-invariant subspace
of the space of closed 2-forms on X, which projects isomorphically to H2(X, C). Consider
the algebra G ⋉ O_{T*X}, where T*X is the cotangent bundle of X. We are going to define a
deformation Ht,c,ω(G, X) of this algebra parametrized by
(1) complex numbers t,
(2) G-invariant functions c on the (finite) set S of pairs s = (Y, g), where g ∈ G, and Y is a connected component of the set of fixed points X^g such that codim Y = 1, and
(3) elements ω ∈ EG = H2(X, C)G .
If all the parameters are zero, this algebra will coincide with G ⋉ O_{T*X}.
Let t, c = {c(Y, g)}, ω ∈ EG be variables. Let Dω/t(X)r be the algebra (over C[t, t−1, ω])
of twisted (by ω/t) differential operators on X with rational coefficients.
Definition 7.2. A Dunkl-Opdam operator for (X, G) is an element of Dω/t(X)r[c] given by
the formula
(7.1)        D := tLv − Σ_{(Y,g)∈S} (2c(Y, g)/(1 − λ_{Y,g})) · fY(x) · (1 − g),
where λY,g is the eigenvalue of g on the conormal bundle to Y , v ∈ Γ(X, T X) is a vector
field on X, and fY ∈ OX(Y) is an element of the coset ξY(v) ∈ OX(Y)/OX (recall that ξY is defined in Subsection 7.2).
Definition 7.3. The algebra Ht,c,ω(X, G) is the subalgebra of G ⋉ Dω/t(X)r[c] generated (over C[t, c, ω]) by the function algebra OX, the group G, and the Dunkl-Opdam operators.
By specializing t, c, ω to numerical values, we can define a family of algebras over C, which
we will also denote Ht,c,ω(G, X). Note that when we set t = 0, the term tLv does not become
0 but turns into the classical momentum.
Definition 7.4. Ht,c,ω(G, X) is called the Cherednik algebra of the orbifold X/G.
Remark 7.5. One has H1,0,ω(G, X) = G ⋉ Dω(X). Also, if λ ≠ 0 then Hλt,λc,λω(G, X) = Ht,c,ω(G, X).
Example 7.6. X = h is a vector space and G is a subgroup in GL(h). Let v be a constant vector field, and let fY(x) = (αY, v)/αY(x), where αY ∈ h* is a nonzero functional vanishing on Y. Then the operator D is just the usual Dunkl-Opdam operator Dv in the complex reflection case (see Section 2.5). This implies that all the Dunkl-Opdam operators in the sense of Definition 7.2 have the form Σ_i fi D_{yi} + a, where fi ∈ C[h], a ∈ G ⋉ C[h], and D_{yi} are the usual Dunkl-Opdam operators (for some basis yi of h). So the algebra Ht,c(G, h) = Ht,c,0(G, X) is the rational Cherednik algebra for (G, h), see Section 3.1.
The algebra Ht,c,ω(G, X) has a filtration F • which is defined on generators by deg(OX ) =
deg(G) = 0, deg(D) = 1 for Dunkl-Opdam operators D.
Theorem 7.7 (the PBW theorem). We have
grF(Ht,c,ω(G, X)) = G ⋉ O(T*X)[t, c, ω].
Proof. Suppose first that X = h is a vector space and G is a subgroup in GL(h). Then, as
we mentioned, Ht,c,ω(G, h) = Ht,c(G, h) is the rational Cherednik algebra for G, h. So in this
case the theorem is true.
Now consider arbitrary X. We have a homomorphism of graded algebras
ψ : grF(Ht,c,ω(G, X)) → G ⋉ O(T*X)[t, c, ω]

(the principal symbol homomorphism).
The homomorphism ψ is clearly surjective, and our job is to show that it is injective (this
is the nontrivial part of the proof). In each degree, ψ is a morphism of finitely generated O_X^G-modules. Therefore, to check its injectivity, it suffices to check the injectivity on the formal neighborhood of each point z ∈ X/G.
Let x be a preimage of z in X, and Gx be the stabilizer of x in G. Then Gx acts on the
formal neighborhood Ux of x in X.
Lemma 7.8. Any action of a finite group on a formal polydisk over C is linearizable.
Proof. Let D be a formal polydisk over C. Suppose we have an action of a finite group G
on D. Then we have a group homomorphism

ρ : G → Aut(D) = GLn(C) ⋉ AutU(D),

where AutU(D) is the group of unipotent automorphisms of D (i.e. those whose derivative at the origin is 1), which is a prounipotent algebraic group.
Our job is to show that the image of G under ρ can be conjugated into GLn(C). The obstruction to this is in the cohomology group H^1(G, AutU(D)), which is trivial since G is finite and AutU(D) is prounipotent over C. □
It follows from Lemma 7.8 that it suffices to prove the theorem in the linear case, which has been accomplished already. We are done. □
Remark 7.9. The following remark is meant to clarify the proof of Theorem 7.7. In the case
X = h, the proof of Theorem 7.7 is based, essentially, on the (fairly nontrivial) fact that the
usual Dunkl-Opdam operators Dv commute with each other. It is therefore very important
to note that in contrast with the linear case, for a general X we do not have any natural
commuting family of Dunkl-Opdam operators. Instead, the operators (7.1) satisfy a weaker
property, which is still sufficient to validate the PBW theorem. This property says that if
D1, D2, D3 are Dunkl-Opdam operators corresponding to vector fields v1, v2, v3 := [v1, v2]
and some choices of the functions fY, then [D1, D2] − D3 ∈ G ⋉ O(X) (i.e., it has no poles).
To prove this property, it is sufficient to consider the case when X is a formal polydisk, with
a linear action of G. But in this case everything follows from the commutativity of the usual
Dunkl operators Dv.
Example 7.10.
(1) Suppose G = 1. Then for t ≠ 0,
Ht,0,ω(G, X) = Dω/t(X).
(2) Suppose G is a Weyl group and X = H the corresponding torus. Then H1,c,0(G, H)
is called the trigonometric Cherednik algebra.
7.4. Globalization. Let X be any smooth algebraic variety, and G ⊂ Aut(X). Assume
that X admits a cover by affine G-invariant open sets. Then the quotient variety X/G
exists.
For any affine open set U in X/G, let U′ be the preimage of U in X. Then we can define the algebra Ht,c,0(G, U′) as above. If U ⊂ V, we have an obvious restriction map Ht,c,0(G, V′) → Ht,c,0(G, U′). The gluing axiom is clearly satisfied. Thus the collection of algebras Ht,c,0(G, U′) can be extended (by sheafification) to a sheaf of algebras on X/G. We are going to denote this sheaf by Ht,c,0,G,X and call it the sheaf of Cherednik algebras on X/G. Thus, Ht,c,0,G,X(U) = Ht,c,0(G, U′).
Similarly, if ψ ∈ H^2(X, Ω^{≥1}_X)^G, we can define the sheaf of twisted Cherednik algebras Ht,c,ψ,G,X. This is done similarly to the case of twisted differential operators (which is the case G = 1).
Remark 7.11.
(1) The construction of Ht,c,ω(G, X) and the PBW theorem extend in a
straightforward manner to the case when the ground field is not C but an algebraically
closed field k of positive characteristic, provided that the order of the group G is
relatively prime to the characteristic.
(2) The construction and main properties of the (sheaves of) Cherednik algebras of algebraic varieties can be extended without significant changes to the case when X is a complex analytic manifold, and G is not necessarily finite but acts properly discontinuously. In the following lectures, we will often work in this generalized setting.
7.5. Modified Cherednik algebra. It will be convenient for us to use a slight modification
of the sheaf Ht,c,ψ,G,X . Namely, let η be a function on the set of conjugacy classes of Y such
that (Y, g) ∈ S. We define Ht,c,η,ψ,G,X in the same way as Ht,c,ψ,G,X except that the Dunkl-
Opdam operators are defined by the formula
(7.2)        D := tLv + Σ_{(Y,g)∈S} (2c(Y, g)/(1 − λ_{Y,g})) · fY(x) · (g − 1) + Σ_Y fY(x) η(Y).
The following result shows that this modification is in fact tautological. Let ψY be the class in H^2(X, Ω^{≥1}_X) defined by the line bundle OX(Y)^{−1}, whose sections are functions vanishing on Y.
Proposition 7.12. One has an isomorphism

Ht,c,η,ψ,G,X → Ht,c,ψ+Σ_Y η(Y)ψ_Y,G,X.
Proof. Let y ∈ Y and z be a function on the formal neighborhood of y such that z|Y = 0 and dz_y ≠ 0. Extend it to a system of local formal coordinates z1 = z, z2, . . . , zd near y. A Dunkl-Opdam operator near y for the vector field ∂z can be written in the form
D = ∂/∂z + (1/z) ( Σ_{m=1}^{n−1} (2c(Y, g^m)/(1 − λ_{Y,g}^m)) (g^m − 1) + η(Y) ).

Conjugating this operator by the formal expression z^{η(Y)} := (z^n)^{η(Y)/n}, we get

z^{η(Y)} ◦ D ◦ z^{−η(Y)} = ∂/∂z + (1/z) Σ_{m=1}^{n−1} (2c(Y, g^m)/(1 − λ_{Y,g}^m)) (g^m − 1).
This implies the required statement. □
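The conjugation step rests on an elementary identity (a standard check, supplied here for the reader): for any constant η,

```latex
z^{\eta} \circ \frac{\partial}{\partial z} \circ z^{-\eta}
  = \frac{\partial}{\partial z} + z^{\eta}\,\frac{\partial}{\partial z}\bigl(z^{-\eta}\bigr)
  = \frac{\partial}{\partial z} - \frac{\eta}{z}.
```

Applied with η = η(Y), this removes exactly the pole term η(Y)/z; and if z^{η(Y)} is interpreted via the GY-invariant power of z (an assumption consistent with the definition above), conjugation leaves the terms involving the group elements g^m unchanged.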
We note that the sheaf H1,c,η,0,G,X localizes to G ⋉ DX on the complement of all the hypersurfaces Y. This follows from the fact that the line bundle OX(Y) is trivial on the complement of Y.
7.6. Orbifold Hecke algebras. Let X be a connected and simply connected complex man
ifold, and G is a discrete group of automorphisms of X which acts properly discontinuously.
Then X/G is a complex orbifold. Let X′ ⊂ X be the set of points with trivial stabilizer. Fix a base point x0 ∈ X′. Then the braid group of X/G is defined to be BG = π1(X′/G, x0). We have an exact sequence 1 → K → BG → G → 1.
Now let S be the set of pairs (Y, g) such that Y is a component of X^g of codimension 1 in
X (such Y will be called a reflection hypersurface). For (Y, g) ∈ S, let GY be the subgroup | https://ocw.mit.edu/courses/18-735-double-affine-hecke-algebras-in-representation-theory-combinatorics-geometry-and-mathematical-physics-fall-2009/02ec876b48928aad62c32c782ce96699_MIT18_735F09_ch07.pdf |
such Y will be called a reflection hypersurface). For (Y, g) ∈ S, let GY be the subgroup of
G whose elements act trivially on Y . This group is obviously cyclic; let nY = |GY |. Let CY
be the conjugacy class in BG corresponding to a small circle going counterclockwise around
the image of Y in X/G, and TY be a representative in CY .
The following theorem follows from elementary topology:
Theorem 7.13. K is defined by the relations TY^{nY} = 1 for all reflection hypersurfaces Y (i.e., K is the intersection of all normal subgroups of BG containing the elements TY^{nY}).
Proof. See, e.g., [BMR] Proposition 2.17. □
For any conjugacy class of hypersurfaces Y such that (Y, g) ∈ S we introduce formal parameters τ_{1Y}, . . . , τ_{nY,Y}. The entire collection of these parameters will be denoted by τ. Let A0 = C[G].
Definition 7.14. We define the Hecke algebra of (G, X), denoted A = Hτ (G, X, x0), to be
the quotient of the group algebra of the braid group, C[BG][[τ ]], by the relations
(7.3)        ∏_{j=1}^{nY} (T − e^{2πij/nY} e^{τ_{jY}}) = 0,    T ∈ CY

(i.e., by the closed ideal in the formal series topology generated by these relations).
Thus, A is a deformation of A0.
It is clear that up to an isomorphism this algebra is independent of the choice of x0, so we will sometimes drop x0 from the notation.
The main result of this section is the following theorem.
Theorem 7.15. Assume that H^2(X, C) = 0. Then A = Hτ(G, X) is a flat formal deformation of A0, which means A = A0[[τ]] as a module over C[[τ]].
Example 7.16. Let h be a finite dimensional vector space, and G be a complex reflection
group in GL(h). Then Hτ (G, h) is the Hecke algebra of G studied in [BMR]. It follows
from Theorem 7.15 that this Hecke algebra is flat. This proof of flatness is in fact the same
as the original proof of this result given in [BMR] (based on the Dunkl-Opdam-Cherednik
operators, and explained above).
Example 7.17. Let h be a universal covering of a maximal torus of a simply connected simple Lie group G, Q∨ be the dual root lattice, and G′ = G ⋉ Q∨ be its affine Weyl group. Then Hτ(h, G′) is the affine Hecke algebra. This algebra is also flat by Theorem 7.15. In fact, its flatness is a well known result from representation theory; our proof of flatness is essentially due to Cherednik [Ch].
Example 7.18. Let G, h, Q∨ be as in the previous example, η be a complex number with a positive imaginary part, and G′′ = G ⋉ (Q∨ ⊕ ηQ∨) be the double affine Weyl group. Then Hτ(h, G′′) is (one of the versions of) the double affine Hecke algebra of Cherednik ([Ch]), and it is flat by Theorem 7.15. The fact that this algebra is flat was proved by Cherednik, Sahi, Noumi, Stokman (see [Ch],[Sa],[NoSt],[St]) using a different approach (q-deformed Dunkl operators).
7.7. Hecke algebras attached to Fuchsian groups. Let H be a simply connected complex Riemann surface (i.e., Riemann sphere, Euclidean plane, or Lobachevsky plane), and Γ be a cocompact lattice in Aut(H) (i.e., a Fuchsian group). Let Σ = H/Γ. Then Σ is a compact complex Riemann surface. When Γ contains elliptic elements (i.e., nontrivial elements
of finite order), we are going to regard Σ as an orbifold: it has special points Pi, i = 1, . . . , m
with stabilizers Zni . Then Γ is the orbifold fundamental group of Σ.1
Let g be the genus of Σ, and al, bl, l = 1, . . . , g, be the a-cycles and b-cycles of Σ. Let cj
be the counterclockwise loops around Pj . Then Γ is generated by al, bl, cj with relations
cj^{nj} = 1,    c1c2 ⋯ cm = ∏_l al bl al^{−1} bl^{−1}.
For each j, introduce formal parameters τ_{kj}, k = 1, . . . , nj. Define the Hecke algebra Hτ(Σ) of Σ to be generated over C[[τ]] by the same generators al, bl, cj with defining relations

∏_{k=1}^{nj} (cj − e^{2πik/nj} e^{τ_{kj}}) = 0,    c1c2 ⋯ cm = ∏_l al bl al^{−1} bl^{−1}.
Thus Hτ (Σ) is a deformation of C[Γ].
This deformation is flat if H is a Euclidean plane or a Lobachevsky plane. Indeed, Hτ (Σ) =
Hτ (Γ, H), so the result follows from Theorem 7.15 and the fact that H2(H, C) = 0.
On the other hand, if H is the Riemann sphere (so that the condition H^2(H, C) = 0 is violated) and Γ ≠ 1 then this deformation is not flat. Indeed, let τ = τ(ε) be a 1-parameter subdeformation of Hτ(Σ) which is flat. Let us compute the determinant of the product c1 ⋯ cm in the regular representation of this algebra (which is finite dimensional if H is the sphere). On the one hand, it is 1, as c1 ⋯ cm is a product of commutators. On the other hand, the eigenvalues of cj in this representation are e^{2πik/nj} e^{τ_{kj}} with multiplicity |Γ|/nj. Computing determinants as products of eigenvalues, we get a nontrivial equation on the τ_{kj}(ε), which means that the deformation Hτ is not flat.
Thus, we see that Hτ(Σ) fails to be flat in the following “forbidden” cases:
g = 0, m = 2, (n1, n2) = (n, n);
m = 3, (n1, n2, n3) = (2, 2, n), (2, 3, 3), (2, 3, 4), (2, 3, 5).
Indeed, the orbifold Euler characteristic of a closed surface Σ of genus g with m special
points x1, . . . , xm whose orders are n1, . . . , nm is
χorb(Σ, x1, . . . , xm) = 2 − 2g − m + Σ_{i=1}^{m} 1/ni,
and the above solutions are exactly the solutions of the inequality

χorb(CP^1, x1, . . . , xm) > 0.
(note that the solutions for m = 1 and solutions (n1, n2) with n1 = n2 don’t arise, since they
don’t correspond to any orbifolds).
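For instance, plugging the extreme cases into the formula (our arithmetic):

```latex
\chi_{\mathrm{orb}} = 2 - 0 - 3 + \tfrac12 + \tfrac13 + \tfrac15 = \tfrac{1}{30} > 0
  \quad \text{for } (2,3,5), \qquad
\chi_{\mathrm{orb}} = 2 - 0 - 3 + \tfrac12 + \tfrac13 + \tfrac16 = 0
  \quad \text{for } (2,3,6),
```

so (2, 3, 5) is forbidden (spherical), while (2, 3, 6) lies on the Euclidean boundary and is not.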
¹Let X be a connected topological space with a properly discontinuous action of a discrete group G. Then the orbifold fundamental group of the orbifold X/G with base point x ∈ X, denoted π1^{orb}(X/G, x), is the set of pairs (g, γ), where g ∈ G and γ is a homotopy class of paths leading from x to gx, with multiplication law (g1, γ1)(g2, γ2) = (g1g2, γ), where γ is γ1 followed by g1(γ2). Obviously, in this situation we have an exact sequence

1 → π1(X, x) → π1^{orb}(X/G, x) → G → 1.
7.8. Hecke algebras of wallpaper groups and del Pezzo surfaces. The case when H
is the Euclidean plane (i.e., Γ is a wallpaper group) deserves special attention. If there are
elliptic elements, this reduces to the following configurations: g = 0 and
m = 3, (n1, n2, n3) = (3, 3, 3), (2, 4, 4), (2, 3, 6) (cases E6, E7, E8),
or
m = 4, (n1, n2, n3, n4) = (2, 2, 2, 2) (case D4).
In these cases, the algebra Hτ (Γ, H) (for numerical τ ) has Gelfand-Kirillov dimension 2,
so it can be interpreted in terms of the theory of noncommutative surfaces.
Recall that a del Pezzo surface (or a Fano surface) is a smooth projective surface, whose
anticanonical line bundle is ample. It is known that such surfaces are CP1 × CP1, or a blow
up of CP2 at up to 8 generic points. The degree of a del Pezzo surface X is by definition the
self-intersection number K · K of its canonical class K. For example, a del Pezzo surface of
degree 3 is a cubic surface in CP3, and the degree of CP2 with n generic points blown up is
d = 9 − n.
Now suppose τ is numerical. Let ε = Σ_{j,k} nj^{−1} τ_{kj}. Also let n be the largest of the nj, and c be the corresponding cj. Let e ∈ C[c] ⊂ Hτ(Γ, H) be the projector to an eigenspace of c. Consider the “spherical” subalgebra Bτ(Γ, H) := eHτ(Γ, H)e.
Theorem 7.19 (Etingof, Oblomkov, Rains, [EOR]).
(i) If ε = 0 then the algebra Bτ(Γ, H) is commutative, and its spectrum is an affine del Pezzo surface. More precisely, in the case (2, 2, 2, 2), it is a del Pezzo surface of degree 3 (a cubic surface) with a triangle of lines removed; in the cases (3, 3, 3), (2, 4, 4), (2, 3, 6) it is a del Pezzo surface of degree 3, 2, 1 respectively with a nodal rational curve removed.
(ii) The algebra Bτ(Γ, H) for ε ≠ 0 is a quantization of the unique algebraic symplectic structure on the surface from (i) with Planck’s constant ε.
Proof. See [EOR]. □
Remark 7.20. In the case (2, 2, 2, 2), Hτ(Γ, H) is the Cherednik-Sahi algebra of rank 1; it controls the theory of Askey-Wilson polynomials.
Example 7.21. This is a “multivariate” version of the Hecke algebras attached to Fuchsian
groups, defined in the previous subsection. Namely, letting Γ, H be as in the previous
subsection, and N ≥ 1, we consider the manifold X = H^N with the action of ΓN = SN ⋉ Γ^N. If H is a Euclidean or Lobachevsky plane, then by Theorem 7.15 Hτ(ΓN, H^N) is a flat
deformation of the group algebra C[ΓN ]. If N > 1, this algebra has one more essential
parameter than for N = 1 (corresponding to reflections in SN ). In the Euclidean case, one
expects that an appropriate “spherical” subalgebra of this algebra is a quantization of the
Hilbert scheme of a del Pezzo surface.
7.9. The Knizhnik-Zamolodchikov functor. In this subsection we will define a global
analog of the KZ functor defined in [GGOR]. This functor will be used as a tool in the proof of Theorem 7.15.
Let X be a simply connected complex manifold, and G a discrete group of holomorphic transformations of X acting on X properly discontinuously. Let X′ ⊂ X be the set of points with trivial stabilizer. Then we can define the sheaf of Cherednik algebras H1,c,η,0,G,X on X/G. Note that the restriction of this sheaf to X′/G is the same as the restriction of the sheaf G ⋉ DX to X′/G (i.e. on X′/G, the dependence of the sheaf on the parameters c and η disappears). This follows from the fact that the line bundles OX(Y) become trivial when restricted to X′.
Now let M be a module over H1,c,η,0,G,X which is a locally free coherent sheaf when restricted to X′/G. Then the restriction of M to X′/G is a G-equivariant D-module on X′ which is coherent and locally free as an O-module. Thus, M corresponds to a locally constant sheaf (local system) on X′/G, which gives rise to a monodromy representation of the braid group π1(X′/G, x0) (where x0 is a base point). This representation will be denoted by KZ(M). This defines a functor KZ, which is analogous to the one in [GGOR].
It follows from the theory of D-modules that any OX/G-coherent H1,c,η,0,G,X-module is locally free when restricted to X′/G. Thus the KZ functor acts from the abelian category Cc,η of OX/G-coherent H1,c,η,0,G,X-modules to the category of representations of π1(X′/G, x0). It is easy to see that this functor is exact.
For any Y, let gY be the generator of GY which has eigenvalue e^{2πi/nY} in the conormal bundle to Y. Let (c, η) → τ(c, η) be the invertible linear transformation defined by the formula

τ_{jY} = 2πi ( 2 Σ_{m=1}^{nY−1} c(Y, gY^m) (1 − e^{2πijm/nY})/(1 − e^{−2πim/nY}) − η(Y) ) / nY.

Proposition 7.22. The functor KZ maps the category Cc,η to the category of representations of the algebra Hτ(c,η)(G, X).
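As a sanity check (our specialization of the transformation formula for τ_{jY} above): for a hypersurface of order nY = 2 the sum has the single term m = 1, and the factor (1 − e^{πij})/(1 − e^{−πi}) equals 1 for j = 1 and 0 for j = 2, so

```latex
\tau_{1Y} = \pi i\,\bigl(2\,c(Y, g_Y) - \eta(Y)\bigr), \qquad
\tau_{2Y} = -\pi i\,\eta(Y).
```

In particular, for η = 0 and nY = 2 only τ_{1Y} depends on the parameter c.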
Proof. The result follows from the corresponding result in the linear case (which we have already proved) by restricting M to the union of G-translates of a neighborhood of a generic point y ∈ Y, and then linearizing the action of GY on this neighborhood. □
7.10. Proof of Theorem 7.15. Consider the module M = Ind_{DX}^{G⋉DX} OX. Then KZ(M) is the regular representation of G which is denoted by regG. We want to show that M deforms uniquely (up to an isomorphism) to a module over H1,c,η,0,G,X for formal c, η. The obstruction to this deformation is in Ext^2_{G⋉DX}(M, M) and the freedom of this deformation is in Ext^1_{G⋉DX}(M, M). Since

Ext^i_{G⋉DX}(M, M) = Ext^i_{DX}(OX, Res M) = Ext^i_{DX}(OX, OX ⊗ CG) = Ext^i_{DX}(OX, OX) ⊗ CG = H^i(X, C) ⊗ CG,

and X is simply connected, we have Ext^1_{G⋉DX}(M, M) = 0, and Ext^2_{G⋉DX}(M, M) = 0 if H^2(X, C) = 0. Thus such a deformation exists and is unique if H^2(X, C) = 0.
Now let Mc,η be the deformation. Then KZ(Mc,η) is an Hτ(c,η)(G, X)-module by Proposition 7.22, and it deforms flatly the module regG. This implies that Hτ(c,η)(G, X) is flat over C[[τ]]. □
Remark 7.23. When X is not simply connected, the theorem is still true under the assumption π2(X) ⊗ C = 0 (i.e. H^2(X̃, C) = 0, where X̃ is the universal cover of X), and the proof is contained in [E1].
7.11. Example: the simplest case of double affine Hecke algebras. Now let G = Z2 ⋉ Z² act on C. Then the conjugacy classes of reflection hyperplanes are four points: 0, 1/2, 1/2 + η/2, η/2, where we suppose the lattice in C is Z ⊕ Zη. Correspondingly, the presentation of G is as follows:
generators: T1, T2, T3, T4;
relations: T1T2T3T4 = 1, Ti² = 1.
Thus, the corresponding orbifold Hecke algebra is the following deformation of CG:
generators: T1, T2, T3, T4;
relations: T1T2T3T4 = 1, (Ti − pi)(Ti − qi) = 0,
where pi, qi (i = 1, . . . , 4) are parameters.
If we renormalize the Ti, these relations turn into

(Ti − ti)(Ti + ti^{−1}) = 0,    T1T2T3T4 = q,
and we get the type C∨C1 double affine Hecke algebra. If we set three of the four Ti to satisfy the undeformed relation Ti² = 1, we get the double affine Hecke algebra of type A1. More precisely, this algebra is generated by T1, . . . , T4 with relations

T2² = T3² = T4² = 1,    (T1 − t)(T1 + t^{−1}) = 0,    T1T2T3T4 = q.
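The renormalization mentioned above can be made explicit (our sketch): writing Ti = γi Ti′ with γi² = −pi qi and ti = pi/γi, the relations transform as

```latex
(T_i - p_i)(T_i - q_i) = 0
\;\Longrightarrow\;
(T_i' - t_i)(T_i' + t_i^{-1}) = 0,
\qquad
T_1'T_2'T_3'T_4' = q := (\gamma_1\gamma_2\gamma_3\gamma_4)^{-1},
```

since (pi/γi)(qi/γi) = −1 forces qi/γi = −ti^{−1}, and rescaling each generator multiplies the product T1T2T3T4 by γ1γ2γ3γ4.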
Another presentation of this algebra (which is more widely used) is as follows. Let E =
C/Z2, an elliptic curve with an Z2 action defined by z �→ −z. Define the partial braid group
B = πorb(E\{0}/Z2, x),
where x is a generic point. Notice that, compared with the usual braid group, we do not
delete three of the four reflection points. The generators of the group π1(E \ {0}, x) (the
fundamental group of a punctured 2-torus) are X (corresponding to the a-cycle on the torus),
Y (corresponding to the b-cycle on the torus) and C (corresponding to the loop around 0). In
order to construct B, which is an extension of Z2 by π1(E \ {0}, x), we introduce an element
T s.t. T² = C (the half-loop around the puncture). Then X, Y, T satisfy the following
relations:
TXT = X⁻¹, T⁻¹YT⁻¹ = Y⁻¹, Y⁻¹X⁻¹YXT² = 1.
The Hecke algebra of the partial braid group is then defined to be the group algebra of B
plus an extra relation: (T − q1)(T + q2) = 0.
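As a sanity check on the relations above (our own illustration, not from the text), one can map X, Y to the lattice translations of C and T to the reflection z ↦ −z; this forgets the braiding (T² = C maps to 1) and the three relations hold on the nose:

```python
# Send X, Y to the unit translations z -> z+1, z -> z+eta and T to the
# reflection z -> -z.  eta = 1j is a sample modulus (our choice).
eta = 1j
Xm = lambda z: z + 1
Ym = lambda z: z + eta
Tm = lambda z: -z
Xi = lambda z: z - 1      # X^{-1}
Yi = lambda z: z - eta    # Y^{-1}

z = 0.3 + 0.4j
# T X T = X^{-1}:
assert abs(Tm(Xm(Tm(z))) - Xi(z)) < 1e-12
# T^{-1} Y T^{-1} = Y^{-1}  (here T^{-1} = T):
assert abs(Tm(Ym(Tm(z))) - Yi(z)) < 1e-12
# Y^{-1} X^{-1} Y X T^2 = 1:
assert abs(Yi(Xi(Ym(Xm(Tm(Tm(z)))))) - z) < 1e-12
```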
A common way to present this Hecke algebra is to renormalize the generators so that one
has the following relations:
TXT = X⁻¹, T⁻¹YT⁻¹ = Y⁻¹, Y⁻¹X⁻¹YXT² = q, (T − t)(T + t⁻¹) = 0.
This is Cherednik’s definition for HH(q, t), the double affine Hecke algebra of type A1 which
depends on two parameters q, t.
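Before degenerating, a quick sanity check of these relations (our own illustration): at q = t = 1 they reduce to the group algebra of Z² ⋊ Z2, and one can realize them by commuting diagonal matrices X, Y together with the swap matrix T:

```python
# At q = t = 1 the HH(q,t) relations are satisfied by diagonal X, Y and
# T = the 2x2 swap matrix (a realization of the group algebra of Z^2 x| Z2).
import numpy as np

I2 = np.eye(2)
T = np.array([[0.0, 1.0], [1.0, 0.0]])           # the reflection
X = np.diag([2.0, 0.5])                          # X acts by a, a^{-1}
Y = np.diag([3.0, 1.0/3.0])
inv = np.linalg.inv

assert np.allclose(T @ X @ T, inv(X))            # T X T = X^{-1}
assert np.allclose(inv(T) @ Y @ inv(T), inv(Y))  # T^{-1} Y T^{-1} = Y^{-1}
assert np.allclose(inv(Y) @ inv(X) @ Y @ X @ T @ T, I2)  # ... T^2 = q = 1
assert np.allclose((T - I2) @ (T + I2), 0*I2)    # (T - t)(T + t^{-1}) = 0 at t = 1
```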
There are two degenerations of the algebra HH(q, t).
1. The trigonometric degeneration.
Set Y = e^{ℏy}, q = e^{ℏ}, t = e^{ℏc} and T = se^{ℏcs}, where s ∈ Z2 is the reflection. Then s, X, y
satisfy the following relations modulo ℏ:
s² = 1, sXs⁻¹ = X⁻¹, sy + ys = 2c, X⁻¹yX − y = 1 − 2cs.
The algebra generated by s, X, y with these relations is called the type A1 trigonomet­
ric Cherednik algebra. It is easy to show that it is isomorphic to the Cherednik algebra
H1,c(Z2, C∗), where Z2 acts on C∗ by z ↦ z⁻¹.
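These relations can be tested in a concrete realization on Laurent polynomials in X. The Dunkl-type operator y below is our own reconstruction (in particular the normalization and the constant term +c were chosen to match the relations above; the text does not give this formula):

```python
# Hypothetical Dunkl-type realization of the A1 trigonometric Cherednik
# algebra on Laurent polynomials in X; the formula for y is our guess,
# normalized so that the stated relations hold.
import sympy as sp

X, c = sp.symbols('X c')

def s(f):
    # the reflection: X -> X^{-1}
    return sp.cancel(f.subs(X, 1/X))

def y(f):
    # y = X d/dX - 2c(1 - s)/(1 - X^{-2}) + c, applied to f
    return sp.cancel(X*sp.diff(f, X) - 2*c*(f - s(f))/(1 - X**(-2)) + c*f)

for f in [X**3, X**2 + 5/X, sp.Integer(1)]:
    # sy + ys = 2c:
    assert sp.cancel(s(y(f)) + y(s(f)) - 2*c*f) == 0
    # X^{-1} y X - y = 1 - 2cs:
    assert sp.cancel(y(X*f)/X - y(f) - (f - 2*c*s(f))) == 0
```

The check runs over a few Laurent polynomials; since all the relations are linear in f, this is a reasonable (though not exhaustive) verification.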
2. The rational degeneration.
In the trigonometric Cherednik algebra, set X = e^{ℏx} and y = ŷ/ℏ. Then s, x, ŷ satisfy the
following relations modulo ℏ:
s² = 1, sx = −xs, sŷ = −ŷs, ŷx − xŷ = 1 − 2cs.
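As in the trigonometric case, these relations can be tested in the rational Dunkl operator realization ŷ = d/dx − (c/x)(1 − s) on polynomials in x (a standard realization, but using it here for verification is our own illustration):

```python
# Rational Dunkl operator check for the A1 rational Cherednik relations.
import sympy as sp

x, c = sp.symbols('x c')

def s(f):
    # the reflection: x -> -x
    return f.subs(x, -x)

def y(f):
    # rational Dunkl operator y = d/dx - (c/x)(1 - s)
    return sp.cancel(sp.diff(f, x) - c*(f - s(f))/x)

for f in [x**4, x**3 + 2*x, sp.Integer(1)]:
    # sy = -ys (equivalently sy + ys = 0):
    assert sp.cancel(s(y(f)) + y(s(f))) == 0
    # yx - xy = 1 - 2cs:
    assert sp.cancel(y(x*f) - x*y(f) - (f - 2*c*s(f))) == 0
```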