MIT OpenCourseWare
http://ocw.mit.edu
8.821 String Theory
Fall 2008
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
8.821 F2008 Lecture 10: CFTs in D > 2
Lecturer: McGreevy
October 11, 2008
In today’s lecture we’ll elaborate on what conformal invariance really means for a QFT. Remember,
we’ll keep working with CFTs in greater than 2 spacetime dimensions.
A very good reference is Conformal Field Theory by Di Francesco et al, p95-107.
1 Conformal Transformations
To begin, we’ll give a few different perspectives on what conformal invariance is and what conformal
transformations are. Our first perspective was emphasized by Brian Swingle after last lecture.
Recall from last lecture that the group structure of the conformal group is the set of coordinate
transformations:
x → x′,   g′_{µν}(x′) = Ω(x) g_{µν}(x)    (1)
However, we should point out that coordinate transformations don’t do anything at all in the sense
that the metric is invariant, ds ′2 = ds2, and all we did was relabel points. To say something useful
about physics, we really want to compare a theory at different distances. For example, we would
like to compare the correlators of physical observables separated at different distances. Conformal
symmetry relates these kinds of observables!
To implement a conformal transformation, we need to change the metric via a Weyl rescaling

x → x,   g_{µν} → Ω(x) g_{µν}    (2)

which changes physical distances ds² → Ω(x) ds². Now follow this by a coordinate transformation such as the one in Eq. (1), so that x → x′ and Ω(x) g_{µν} → g′_{µν}(x′). This takes us to a coordinate system where the metric has the same form as the one we started with, but the points have all been moved around and pushed closer together or farther apart. Therefore, we can view the conformal group as those coordinate transformations which can 'undo' a Weyl rescaling.
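This 'undoing' can be checked numerically. Below is a minimal sketch in the Euclidean analogue (all function names are our own): for a special conformal transformation x → x′ = (x + b x²)/(1 + 2b·x + b²x²), the Jacobian J obeys JᵀJ = Ω(x)·1 with Ω(x) = (1 + 2b·x + b²x²)⁻², so the flat metric pulls back to a Weyl rescaling of itself.

```python
# Check numerically that a special conformal transformation is conformal:
# its Jacobian satisfies J^T J = Omega(x) * identity.

def dot(u, v):
    return sum(a * c for a, c in zip(u, v))

def sct(x, b):
    # x' = (x + b x^2) / (1 + 2 b.x + b^2 x^2)
    sigma = 1 + 2 * dot(b, x) + dot(b, b) * dot(x, x)
    return [(xi + bi * dot(x, x)) / sigma for xi, bi in zip(x, b)]

def jacobian(f, x, h=1e-6):
    # central finite differences
    d = len(x)
    J = [[0.0] * d for _ in range(d)]
    for j in range(d):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(d):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

x, b = [0.3, -0.4], [0.2, 0.1]
J = jacobian(lambda y: sct(y, b), x)
sigma = 1 + 2 * dot(b, x) + dot(b, b) * dot(x, x)
omega = sigma ** -2
# J^T J should be omega times the identity
for i in range(2):
    for j in range(2):
        jtj = sum(J[k][i] * J[k][j] for k in range(2))
        assert abs(jtj - (omega if i == j else 0.0)) < 1e-6
```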
1.1 Scale transformations
Another useful perspective on conformal invariance, particularly for AdS/CFT, is to think about
scale transformations.
Any local (relativistic) QFT has a stress-energy tensor T^{µν}, which is conserved if the theory is translation invariant. T^{µν} encodes the linear response of the action to a small change in the metric:

δS = ∫ T^{µν} δg_{µν}    (3)

If we put in gravity, then T^{µν} = 0 by the equations of motion. But let's return to QFT without gravity. Consider making a scale transformation, which changes the metric:

δg_{µν} = 2λ g_{µν}  ⟹  δS = 2λ ∫ T^µ_µ    (4)

where λ is a constant. Therefore we see that if T^µ_µ = 0 then the theory is scale invariant.
In fact, since nothing we said above depended on the fact that λ is a constant, and since our theory
is a local theory, we can actually make the following transformation:

δg_{µν} = δΩ(x) g_{µν}  ⟹  δS = ∫ T^µ_µ δΩ(x)    (5)

We conclude from the two equations above that, at least classically:

• If T^µ_µ = 0, the theory has both scale invariance and conformal invariance.
• If the theory is conformally invariant, it is also scale invariant, and T^µ_µ = 0.
However scale invariance does not quite imply conformal invariance. For more details please see
Polchinski’s paper, Scale and conformal invariance in quantum field theory.
The conserved currents and charges of the transformations above are:

S_µ = x^ν T_{µν}  ⟹  D ≡ ∫ S_0 d^d x    (6)

C_{µν} = (2x_µ x_λ − x² g_{µλ}) T^λ_ν  ⟹  C_µ ≡ ∫ C_{0µ} d^d x    (7)

since both ∂^µ S_µ and ∂^µ C_{µν} are proportional to T^µ_µ.
1.2 Geometric Interpretation
Lastly we'd like to give an alternative geometric interpretation of the conformal group. The conformal group in R^{d,1} is isomorphic to SO(d + 1, 2), the Lorentz group of a space with two extra dimensions. Suppose this space R^{d+1,2} has metric

η_{ab} = diag(− + + ... + −)_{ab}    (8)

where the last two dimensions are the 'extra' ones. A light ray in this space can be parameterized by d + 1 dimensional coordinates x^µ in the following way:

ζ^a = κ (x^µ, ½(1 − x²), ½(1 + x²))    (9)
where κ is some arbitrary constant. The group SO(d + 1, 2) moves these light rays around. We
can interpret these transformations as maps on x^µ, and in fact these transformations are precisely
the conformal transformations, as you can check on your own.
Invariants in R^{d+1,2} should also be conformal invariants, for example

ζ1 · ζ2 = η_{ab} ζ1^a ζ2^b = ½ κ1 κ2 (x1 − x2)²    (10)
This statement is not quite true because ζ^a and λζ^a are identified with the same x^µ, so κ is a redundant variable. Conformal invariants actually are cross ratios of invariants in R^{d+1,2}, for example

(ζ1 · ζ2)(ζ3 · ζ4) / [(ζ1 · ζ3)(ζ2 · ζ4)]    (11)
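This embedding is easy to play with numerically. The sketch below uses a Euclidean toy version with metric diag(+, ..., +, +, −) on the last slot (so the signs of individual inner products are convention dependent; helper names are ours), and checks that each ζ is null, and that the arbitrary κ's drop out of the cross ratio, which reproduces the cross ratio of squared separations:

```python
# Light-ray embedding: zeta(x) = kappa * (x, (1-x^2)/2, (1+x^2)/2).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def zeta(x, kappa):
    x2 = dot(x, x)
    return [kappa * c for c in list(x) + [(1 - x2) / 2, (1 + x2) / 2]]

def inner(z, w):
    # toy metric diag(+,...,+,+,-): last component timelike
    return dot(z[:-1], w[:-1]) - z[-1] * w[-1]

def r2(x, y):
    d = [a - b for a, b in zip(x, y)]
    return dot(d, d)

pts = [[0.1, 0.2], [1.3, -0.5], [-0.7, 0.4], [0.9, 1.1]]
kappas = [2.0, 0.5, 3.0, 1.7]
z = [zeta(x, k) for x, k in zip(pts, kappas)]

# each embedded point is a light ray (null vector)
for zi in z:
    assert abs(inner(zi, zi)) < 1e-9

# the kappa's cancel out of the cross ratio of inner products,
# leaving the cross ratio of squared separations
cross_z = (inner(z[0], z[1]) * inner(z[2], z[3])) / (inner(z[0], z[2]) * inner(z[1], z[3]))
cross_x = (r2(pts[0], pts[1]) * r2(pts[2], pts[3])) / (r2(pts[0], pts[2]) * r2(pts[1], pts[3]))
assert abs(cross_z - cross_x) < 1e-9
```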
This perspective was lifted from a paper by Callan. It may be useful if you are in the business of relating CFTs to theories in higher dimensional spacetimes, but you are otherwise free to ignore it.
2 Constraints on correlators in a CFT
We will now attempt to say something concrete about constraints on a QFT from conformal invariance, namely constraints on correlators. Consider a Green's function
G = ⟨φ1(x1) ... φn(xn)⟩    (12)

where φi is a conformal primary. Recall from last lecture that this means φ satisfies [D̂, φ(0)] = −iΔφ(0), where Δ is the weight and D̂ is the dilatation operator defined above.
How does φ(x) transform under D̂? Writing the field as φ(x) = e^{−iP̂·x} φ(0) e^{+iP̂·x}, and using the following result

e^{−iP̂·x} D̂ e^{+iP̂·x} = D̂ + x · P̂    (13)

which follows from the algebra, we find [D̂, φ(x)] = −i(Δ + x · ∂)φ(x).
The Green's function thus transforms under the action of D̂ by:

δG = Σ_{i=1}^n (x_i^µ ∂/∂x_i^µ + Δ_i) ⟨∏_i φ_i(x_i)⟩ + ⟨0| D̂ ∏_i φ_i(x_i) |0⟩ + ⟨0| ∏_i φ_i(x_i) D̂ |0⟩.    (14)

Assuming that the vacuum is conformally invariant, the last two terms above vanish. We will assume this in the rest of the discussion. N = 4 is a good example where there is a conformally invariant vacuum.
Now let's ignore what we just did with infinitesimal transformations and look at the finite transformations of the Green's function:

G′ = ∏_{i=1}^n |∂x′/∂x|^{Δ_i/d}_{x=x_i} ⟨∏_i φ_i(x_i)⟩ ≡ ∏_{i=1}^n Ω_i^{−Δ_i/2} ⟨∏_i φ_i(x′_i)⟩    (15)

where

Ω_i = λ^{−2}  (scale transformation),    Ω_i = (1 + 2b·x_i + b² x_i²)²  (special conformal).
Conformal invariance tells us exactly how the Green's function must transform. Therefore, up to the Ω_i factors, the Green's function can only depend on conformal invariants I made from the x_i^µ. What are the conformal invariants? Assuming the φ's are scalars, let's constrain the conformal invariants by using the symmetries of the theory:
• Translation invariance implies I depends only on differences: xi − xj. Therefore there are
d(n − 1) numbers I can depend on.
• Rotational invariance implies I depends only on magnitudes of differences: r_ij = |x_i − x_j|. There are n(n − 1)/2 of these numbers.
• Scale invariance implies I depends only on ratios of magnitudes r_ij/r_kl.
• Finally, special conformal transformations transform r_ij by

(r′_12)² = r_12² / [(1 + 2b·x_1 + b² x_1²)(1 + 2b·x_2 + b² x_2²)] = r_12² / (Ω_1^{1/2} Ω_2^{1/2}).    (16)

This result implies that only cross ratios r_ij r_kl / (r_ik r_jl) are invariant.
There are n(n − 3)/2 of these cross ratios, so only n(n − 3)/2 conformal invariants.¹ Note that this implies there are no conformal invariants for n < 4, namely for the 2-pt and 3-pt functions. This is obvious from the fact that you cannot form cross ratios with only 2 or 3 spatial vectors.
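The counting above can be sanity-checked numerically. The sketch below (Euclidean, two dimensions, with helper names of our own) verifies that a cross ratio is unchanged by translations, rotations, dilatations, and a special conformal transformation, while a plain ratio of separations is ruined by the special conformal map:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def translate(x, a):
    return [xi + ai for xi, ai in zip(x, a)]

def rotate(x, t):
    c, s = math.cos(t), math.sin(t)
    return [c * x[0] - s * x[1], s * x[0] + c * x[1]]

def dilate(x, lam):
    return [lam * xi for xi in x]

def sct(x, b):
    sigma = 1 + 2 * dot(b, x) + dot(b, b) * dot(x, x)
    return [(xi + bi * dot(x, x)) / sigma for xi, bi in zip(x, b)]

def cross_ratio(p):
    # r12 r34 / (r13 r24)
    return (dist(p[0], p[1]) * dist(p[2], p[3])) / (dist(p[0], p[2]) * dist(p[1], p[3]))

pts = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]]
c0 = cross_ratio(pts)
maps = [lambda x: translate(x, [0.3, -1.2]),
        lambda x: rotate(x, 0.7),
        lambda x: dilate(x, 2.5),
        lambda x: sct(x, [0.1, 0.05])]
for f in maps:
    assert abs(cross_ratio([f(p) for p in pts]) - c0) < 1e-9

# a plain ratio of separations is NOT invariant under the special conformal map
ratio = lambda p: dist(p[0], p[1]) / dist(p[2], p[3])
moved = [sct(p, [0.1, 0.05]) for p in pts]
assert abs(ratio(moved) - ratio(pts)) > 1e-3
```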
2.1 Two-point function
In fact let's look at the 2-pt function for scalar primaries. Its spatial dependence must be completely determined by the logic above:

⟨φ1(x1) φ2(x2)⟩ = c_12 f(|r_12|) = c_12 / r_12^{Δ1+Δ2}    (17)
¹Ginsparg proves why there are n(n − 3)/2, if you are curious.
where the first equality follows from translation plus rotation invariance, and the second equality follows from scale transformations on both sides. Finally let's apply a special conformal transformation to both sides:

Ω_1^{−Δ1/2} Ω_2^{−Δ2/2} ⟨φ1(x1) φ2(x2)⟩ = c_12 / (r′_12)^{Δ1+Δ2} = c_12 Ω_1^{−(Δ1+Δ2)/4} Ω_2^{−(Δ1+Δ2)/4} / r_12^{Δ1+Δ2}    (18)
However using Eq.(17), we already know what �φ1(x1)φ2(x2)� is. From this we conclude either
• Δ1 = Δ2, OR c12 = 0
The special conformal transformation was necessary for this result.
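The mechanics of this argument can be seen in a small numerical sketch (Euclidean, with one particular covariance convention assumed; all names are ours, not the notes'): demand G(x′1, x′2) = σ1^{Δ1} σ2^{Δ2} G(x1, x2) under a special conformal transformation with σ(x) = 1 + 2b·x + b²x², and test the candidate G = 1/r12^{Δ1+Δ2}. It passes exactly when Δ1 = Δ2:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sigma(x, b):
    return 1 + 2 * dot(b, x) + dot(b, b) * dot(x, x)

def sct(x, b):
    s = sigma(x, b)
    return [(xi + bi * dot(x, x)) / s for xi, bi in zip(x, b)]

def dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def G(x1, x2, D1, D2):
    # candidate two-point function 1 / r12^(D1+D2)
    return dist(x1, x2) ** (-(D1 + D2))

b = [0.2, -0.1]
x1, x2 = [0.5, 0.3], [-1.0, 0.8]
s1, s2 = sigma(x1, b), sigma(x2, b)
y1, y2 = sct(x1, b), sct(x2, b)

def mismatch(D1, D2):
    # covariance (in this convention) demands G(x1', x2') = s1^D1 s2^D2 G(x1, x2)
    return abs(G(y1, y2, D1, D2) - s1 ** D1 * s2 ** D2 * G(x1, x2, D1, D2))

assert mismatch(1.0, 1.0) < 1e-9   # equal weights: covariant
assert mismatch(1.0, 2.0) > 1e-3   # unequal weights: not covariant
```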
As an aside, there is an alternate shorter argument by Cardy which gives the same result without
using the annoying formula for the special conformal transformation, though we still have to ‘do’
one. First we note that the two-point function �φ1(x1)φ2(x2)� depends only on |x1 − x2|, so we
can perform a 180 degree rotation to obtain �φ2(x1)φ1(x2)�, which must give the same answer.
Performing a conformal transformation on the two Green's functions above gives

Ω_1^{−Δ1/2} Ω_2^{−Δ2/2} ⟨φ1(x1) φ2(x2)⟩ = Ω_1^{−Δ2/2} Ω_2^{−Δ1/2} ⟨φ2(x1) φ1(x2)⟩    (19)

So either G_12 = 0 or Δ1 = Δ2.
Some final comments on the two point function: for a set of fields φ_i with Δ_i = Δ_j = Δ,

⟨φ_i(x1) φ_j(x2)⟩ = d_ij / r_12^{2Δ}    (20)
where dij must be symmetric by the argument of Cardy. If we assume the operators are hermitian
then dij is also real, and then we can diagonalize dij . In a unitary theory, the diagonal elements of
dij are norms of states, and positive. By rescaling φi, we can arrive at a basis where dij = δij .
2.2 Higher point functions
There are no conformal invariants for the three-point function either. Using translation and rotation
invariance, we can write the three-point function as
G_123 = ⟨φ1(x1) φ2(x2) φ3(x3)⟩ = Σ_{a,b,c} C_abc / (r_12^a r_23^b r_31^c).    (21)
Scale invariance then implies all terms on the RHS transform in the same way as the LHS, so
a + b + c = Δ1 + Δ2 + Δ3. You can then check that special conformal transformations give the
additional constraint that
a = +Δ1 + Δ2 − Δ3
b = −Δ1 + Δ2 + Δ3
c = +Δ1 − Δ2 + Δ3    (22)
or else Cabc = 0. The permutation symmetry of rij also implies Cabc is symmetric. We can’t say
anything more.
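The special-conformal constraint can be phrased as a small linear system: the local factor attached to each point x_i must appear with total power matching Δ_i, which gives a + c = 2Δ1, a + b = 2Δ2, b + c = 2Δ3 on top of the scale constraint a + b + c = ΣΔ. A sketch (helper name is ours) that solves it and reproduces Eq. (22):

```python
def three_point_exponents(D1, D2, D3):
    # Scale invariance: a + b + c = D1 + D2 + D3.
    # Special conformal covariance: a + c = 2*D1, a + b = 2*D2, b + c = 2*D3.
    # Solving gives the exponents of r12, r23, r31 respectively.
    S = D1 + D2 + D3
    return S - 2 * D3, S - 2 * D1, S - 2 * D2

a, b, c = three_point_exponents(1.0, 1.5, 2.0)
assert (a, b, c) == (0.5, 2.5, 1.5)   # = (D1+D2-D3, -D1+D2+D3, D1-D2+D3)
assert a + b + c == 1.0 + 1.5 + 2.0   # scale constraint holds
```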
For higher point functions, we can say even less, since there exist conformal invariants formed by
the cross ratios, and any correlator contains an arbitrary function of the conformal invariants which
cannot be determined by the conformal symmetry. For example, for n = 4, there are 4 × 1/2 = 2
invariants:
G(x1, x2, x3, x4) = F( r_12 r_34 / (r_13 r_24), r_12 r_34 / (r_23 r_14) ) ∏_{i<j} r_ij^{−(Δ_i + Δ_j − Δ/3)}    (23)

where Δ = Σ_i Δ_i and F is an undetermined function of the two invariants.
The results in this section are true for D > 1, and actually for D = 2 there are extra symmetries
which allow us to learn something about these undetermined functions of invariants. They’re often
hypergeometric. Also, in principle, there exist similar expressions for non-scalar fields, though
John McGreevy has never caught a glimpse of one. He has glimpsed some spectacular things about
Wilson loops in this context, though.
3
Conformal Compactifications
Let's take a short detour to conformal compactifications. We'll find that understanding this will be quite useful for understanding the state-operator correspondence, and later for understanding the geometry of AdS. The general goal of such coordinate transformations is to map infinite coordinates to finite ones, thus allowing us to understand the structure of the boundary of spacetime at ∞.
In particular we'd like to find coordinates in which the metric is Ω(x)g_{µν}, where the coordinates have a finite range, because we can take care of the Ω(x) with a Weyl transformation. Furthermore, the presence of the factor of Ω(x) does not affect the causal structure of spacetime: spacelike vectors remain spacelike, and the same goes for lightlike and timelike.² All this talk makes Penrose diagrams sound way more informative than they may seem.
Start with the simple example of Euclidean space R^{p+1} with metric ds² = dr² + r² dΩ_p², where dΩ_p² is the metric on the p-sphere. Define r = tan(θ/2). Then the metric is

ds² = (1 / (4 cos⁴(θ/2))) (dθ² + sin²θ dΩ_p²)    (24)

which is the metric on the (p+1)-sphere with an extra conformal factor 1/(4 cos⁴(θ/2)). This extra factor blows up at θ = π, i.e. r → ∞, but we don't care! By making this coordinate change, we turned r → ∞ into a finite point, thus adding a point to R^{p+1} to get S^{p+1}. It was a one-point compactification.
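Eq. (24) is a short calculus exercise; the sketch below checks it numerically, coefficient by coefficient, at a few sample angles:

```python
import math

# With r = tan(theta/2):
#   dr^2 + r^2 dOmega^2 = (dtheta^2 + sin^2(theta) dOmega^2) / (4 cos^4(theta/2))
for theta in [0.3, 1.0, 2.0, 2.9]:
    conf = 1.0 / (4 * math.cos(theta / 2) ** 4)
    # dtheta^2 coefficient: (dr/dtheta)^2, with dr/dtheta = 1/(2 cos^2(theta/2))
    drdtheta = 1.0 / (2 * math.cos(theta / 2) ** 2)
    assert abs(drdtheta ** 2 - conf) < 1e-9 * conf
    # sphere coefficient: r^2 should equal conf * sin^2(theta)
    r = math.tan(theta / 2)
    assert abs(r ** 2 - conf * math.sin(theta) ** 2) < 1e-9 * max(1.0, r ** 2)
```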
Moving on to Minkowski space R^{p,1} with ds² = −dt² + dr² + r² dΩ²_{p−1}, by making the coordinate change

t ± r = tan((τ ± θ)/2)

the metric becomes

ds² = (−dτ² + dθ² + sin²θ dΩ²_{p−1}) · (1 / (4 cos²((τ+θ)/2) cos²((τ−θ)/2)))    (25)
²There may be some trouble at the conformal boundary, though.
Figure 1: Penrose diagram of Minkowski space, with angular coordinates on Sp−1 suppressed. The
size of the Sp−1 shrinks at sin θ = 0. The analytical continuation of the finite τ coordinate is
indicated by the dashed lines, and gives the Einstein static universe.
which looks like R × Sp with a conformal factor which blows up at the conformal boundary. τ has
range [−π, π] and θ has range [0, π]. The Penrose diagram is shown in Fig(1).
The reason we went through all this trouble is that a different subgroup of the action of the conformal group is actually obvious in these coordinates. Recall the conformal group of R^{p,1} is SO(p + 1, 2). There are two interesting decompositions (subgroups) of the conformal group:
• SO(p, 1) × SO(1, 1), where SO(p, 1) is the Lorentz group and SO(1, 1) corresponds to dilatations D̂ (don't worry about why). If we think about this decomposition in terms of the higher dimensional space R^{p+1,2}, the two subgroups act on different directions: SO(p, 1) acts on the (− + ... +) block and SO(1, 1) on the last two (+ −) directions.    (26)
The Hamiltonian is just the usual generator of time translations and has a continuous spectrum, because the field theory lives on R^p, and we've assumed it's a CFT.
• SO(p+1) × SO(2) is obvious in the conformal coordinates we discussed above. Here SO(p+1) rotates S^p and SO(2) corresponds to τ translations. In R^{p+1,2}, SO(p+1) rotates the spatial (+ ... +) coordinates and SO(2) rotates the two time-like (−) coordinates.    (27)
Figure 2: The state-operator correspondence. The cylinder on the left represents S^p. Start with an initial state |ψ⟩ and evolve in τ with H_τ. The picture on the right is the space in r coordinates, where the red dashed lines are constant τ slices which are also constant r slices.
The τ-translations are generated by a different operator than the one above. Let's call it H_global, since τ is called the global time. In fact, it's actually equal to

H_global = ½ (C_0 + P_0) = J_{0,p+2}    (28)

where C_0 is the 0 component of the special conformal generator, and J_{0,p+2} is the generator in the space R^{p+1,2}. This time spatial sections of the theory are S^p, which implies H_global has a discrete spectrum!
One lesson from this is that it’s difficult in a CFT to agree on what the Hamiltonian is, because we
have a bunch of charges to pick from. The physics looks different in different coordinates though
they are of course related by conformal transformations.
4 State-Operator Correspondence
Consider a QFT on R × S^p, which means time τ × a compact space to avoid some IR issues, so that there is a ground state separated by a gap from the first excited state. The metric is ds² = dτ² + dΩ_p². Make a (large) change of coordinates r = e^τ; then the metric is ds² = (dr² + r² dΩ_p²)/r², which is flat space with a conformal factor.³
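A one-line consequence of r = e^τ, checked numerically below: a τ translation on the cylinder is a dilatation on the plane, which is the sense in which the plane's dilatation operator acts as a Hamiltonian here:

```python
import math

# Under r = exp(tau), a shift tau -> tau + s becomes a dilatation r -> exp(s) * r.
s = 0.4
for tau in [-1.0, 0.0, 0.7, 2.0]:
    r = math.exp(tau)
    r_shifted = math.exp(tau + s)
    assert abs(r_shifted - math.exp(s) * r) < 1e-12 * max(1.0, r_shifted)
```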
The statement of the state-operator correspondence is that given an initial state |ψ⟩ on the cylinder (S^p), we can make a conformal transformation that squashes it all to the origin on the plane (R^{p+1}). Thus it is a local perturbation on the plane, and there is a corresponding local operator O_ψ(0) at the origin. We can also go from local operators on the plane back to initial states on the cylinder:

|O⟩ = lim_{x→0} O(x) |0⟩    (29)

Therefore in a CFT there is a correspondence between local operators and states. See Fig. (2).
³OK, we cheated on the sign of dτ² in ds², but it just means using a slightly different coordinate transformation.
6.852: Distributed Algorithms
Fall, 2009
Class 5
Today’s plan
• Review EIG algorithm for Byzantine agreement.
• Number-of-processors lower bound for Byzantine
agreement.
• Connectivity bounds.
• Weak Byzantine agreement.
• Time lower bounds for stopping agreement and
Byzantine agreement.
• Reading: Sections 6.3-6.7, [Aguilera, Toueg],
[Keidar-Rajsbaum]
• Next:
– Other distributed agreement problems
– Reading: Chapter 7 (but skim 7.2)
Byzantine agreement
• Recall correctness conditions:
– Agreement: No two nonfaulty processes decide on
different values.
– Validity: If all nonfaulty processes start with the same v,
then v is the only allowable decision for nonfaulty
processes.
– Termination: All nonfaulty processes eventually decide.
• Presented EIG algorithm for Byzantine agreement,
using:
– Exponential communication (in f)
– f+1 rounds
– n > 3f
EIG algorithm for Byzantine
agreement
• Use EIG tree.
• Relay messages for f+1 rounds.
• Decorate the EIG tree with values from V, replacing any
garbage messages with default value v0.
• Call the decorations val(x), where x is any node label.
• Decision rule:
– Redecorate the tree bottom-up, defining newval(x).
• Leaf: newval(x) = val(x)
• Non-leaf: newval(x) = newval of a strict majority of children in the tree, if a majority exists; v0 otherwise.
– Final decision: newval(λ) (newval at root)
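The decision rule above can be run directly. The sketch below simulates the two-round EIG exchange for n = 4, f = 1 with the slides' initial values 1, 1, 0, 0; the faulty process's lying strategy is a toy choice of ours, not the slides' example:

```python
# Runnable sketch of EIG for Byzantine agreement with n = 4, f = 1.
N, F, V0 = 4, 1, 0
inputs = {1: 1, 2: 1, 3: 0, 4: 0}
faulty = {3}
procs = list(range(1, N + 1))

def round1_msg(sender, receiver):
    # a faulty sender may lie, differently to different receivers
    return receiver % 2 if sender in faulty else inputs[sender]

def round2_msg(relayer, receiver, about, heard):
    # a faulty relayer lies about what it heard
    return (receiver + about) % 2 if relayer in faulty else heard

# Decorate each process's EIG tree: val[p][(j,)] and val[p][(j, k)].
val = {p: {} for p in procs}
for p in procs:
    for j in procs:
        val[p][(j,)] = round1_msg(j, p)
for p in procs:
    for k in procs:                      # k relays what it heard about j
        for j in procs:
            if j != k:
                val[p][(j, k)] = round2_msg(k, p, j, val[k][(j,)])

def decide(tree):
    def newval(label):
        if len(label) == F + 1:          # leaf of the EIG tree
            return tree[label]
        votes = [newval(label + (q,)) for q in procs if q not in label]
        maj = [v for v in set(votes) if votes.count(v) * 2 > len(votes)]
        return maj[0] if maj else V0     # default v0 if no strict majority
    return newval(())                    # decision = newval at the root

decisions = {p: decide(val[p]) for p in procs if p not in faulty}
assert len(set(decisions.values())) == 1     # Agreement among 1, 2, 4
```

With these inputs the nonfaulty root votes come out 1, 1, 0, 0, so no strict majority exists and every nonfaulty process falls back to v0 = 0 — agreement holds even though the vote is split.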
Example: n = 4, f = 1
• T4,1: the EIG tree with root λ, level-1 nodes 1, 2, 3, 4, and level-2 nodes 12, 13, 14, 21, 23, 24, 31, 32, 34, 41, 42, 43.
• Consider a possible execution in which p3 is faulty.
• Initial values: 1 1 0 0.
• Round 1, Round 2: [diagram of the decorated trees at Process 1, Process 2, (Process 3), and Process 4; the faulty p3's lies make the copies disagree on nodes whose labels involve 3]
Example: n = 4, f = 1
• Now calculate newvals, bottom-up, choosing majority values, v0 = 0 if no majority.
• [diagram of the trees at Process 1, Process 2, (Process 3), and Process 4, corrected by taking majorities]
Correctness proof
• Lemma 1: If x ends with a nonfaulty process index then
val(x)i = val(x)j for every nonfaulty i and j.
In the example, such nodes are: [diagram of the EIG tree with the nodes whose labels end in a nonfaulty process index marked]
• Lemma 2: If x ends with a nonfaulty process index then ∃v
such that val(x)i = newval(x)i = v for every nonfaulty i.
• Proof: Induction on level in the tree, bottom up.
Main correctness conditions
• Validity:
– Uses Lemma 2.
• Termination:
– Obvious.
• Agreement:
Agreement
• Path covering: Subset of nodes containing at least one node on each path from root to leaf. [diagram of the EIG tree with a path covering marked]
• Common node: One for which all nonfaulty processes have the same newval.
– All nodes whose labels end in a nonfaulty process index are common.
Agreement
• Lemma 3: There exists a path covering all of whose nodes are common.
• Proof:
– Let C = nodes with labels of the form xj, j nonfaulty.
• Lemma 4: If there's a common path covering of the subtree rooted at any node x, then x is common.
• Lemma 5: The root is common.
• Yields Agreement.
Complexity bounds
• As for EIG for stopping agreement:
– Time: f+1
– Communication: O(n^(f+1))
• But now, also requires n > 3f processors.
• Q: Is n > 3f necessary?
Lower bound on the number of processes for Byzantine Agreement
Number of processors for
Byzantine agreement
• n > 3f is necessary!
– Holds for any n-node (undirected) graph.
– For graphs with low connectivity, may need even more
processors.
– Number of failures that can be tolerated for Byzantine
agreement in an undirected graph G has been
completely characterized, in terms of number of nodes
and connectivity.
• Theorem 1: 3 processes cannot solve Byzantine
Agreement with 1 possible failure.
Proof (3 vs. 1 BA)
• By contradiction. Suppose algorithm A,
consisting of processes 1, 2, 3, solves
BA with 1 possible failure.
• Construct new system S from 2 copies of A, with initial values as follows: [diagram: the triangle A with processes 1, 2, 3, and the 6-node ring S with processes 1, 2, 3, 1′, 2′, 3′, where 1, 2, 3 start with 0 and 1′, 2′, 3′ start with 1]
• What is S?
– A synchronous system of some kind.
– Not required to satisfy any particular correctness conditions.
– Not necessarily a correct BA algorithm for the 6-node ring.
– Just some synchronous system, which runs and does something.
– We'll use it to get our contradiction.
Proof (3 vs 1 BA)
• Consider 2 and 3 in S:
• Looks to them like:
– They're in A, with a faulty process 1.
– 1 emulates 1′-2′-3′-1 from S.
• In A, 2 and 3 must decide 0.
• So by indistinguishability, they decide 0 in S also.
[diagram: the ring S next to the triangle A with faulty process 1]
Proof (3 vs 1 BA)
• Now consider 1′ and 2′ in S.
• Looks to them like:
– They're in A with a faulty process 3.
– 3 emulates 3′-1-2-3 from S.
• They must decide 1 in A, so they decide 1 in S also.
[diagram: the ring S next to the triangle A with faulty process 3]
Proof (3 vs 1 BA)
• Finally, consider 3 and 1′ in S:
• Looks to them like:
– They're in A, with a faulty process 2.
– 2 emulates 2′-3′-1-2 from S.
• In A, 3 and 1 must agree.
• So by indistinguishability, 3 and 1′ agree in S also.
• But we already know that process 1′ decides 1 and process 3 decides 0, in S.
• Contradiction!
[diagram: the ring S next to the triangle A with faulty process 2]
Discussion
• We get this contradiction even if the original algorithm A is assumed to "know n".
• That simply means that:
– The processes in A have the number 3 hard-wired into their state.
– Their correctness properties are required to hold only when they are actually configured into a triangle.
• We are allowed to use these processes in a different configuration S---as long as we don't claim any particular correctness properties for S.
Impossibility for n = 3f
• Theorem 2: n processes can't solve BA, if n ≤ 3f.
• Proof:
– Similar construction, with f processes treated as a group.
– Or, can use a reduction:
• Show how to transform a solution for n ≤ 3f to a solution for 3 vs. 1.
• Since 3 vs. 1 is impossible, we get a contradiction.
• Consider n = 2 as a special case:
– n = 2, f = 1. [diagram: processes 1 and 2 with inputs 0 and 1]
– Each could be faulty, requiring the other to decide on its own value.
– Or both nonfaulty, which requires agreement, contradiction.
• So from now on, assume 3 ≤ n ≤ 3f.
• Assume a Byzantine Agreement algorithm A for (n,f).
• Transform it into a BA algorithm B for (3,1).
Transforming A to B
• Algorithm:
– Partition A-processes into groups I1, I2, I3, where 1 ≤ |I1|, |I2|, |I3| ≤ f.
– Each Bi process simulates the entire Ii group. [diagram: B1, B2, B3 each enclosing a group of A-processes]
– Bi initializes all processes in Ii with Bi's initial value.
– At each round, Bi simulates sending messages:
• Local: Just simulate locally.
• Remote: Package and send.
– If any simulated process decides, Bi decides the same (use any).
• Show B satisfies correctness conditions:
– Consider any execution of B with at most 1 fault.
– It simulates an execution of A with at most f faults.
– Correctness conditions must hold in the simulated execution of A.
– Show these all carry over to B's execution.
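The partition step can be made concrete (a sketch with a helper name of our own): three groups of size between 1 and f exist exactly in the regime 3 ≤ n ≤ 3f that the reduction needs:

```python
def partition_sizes(n, f):
    # split n processes into three groups as evenly as possible;
    # valid only when every group has size between 1 and f
    sizes = [n // 3 + (1 if i < n % 3 else 0) for i in range(3)]
    return sizes if all(1 <= s <= f for s in sizes) else None

assert partition_sizes(9, 3) == [3, 3, 3]
assert partition_sizes(7, 3) == [3, 2, 2]
assert partition_sizes(5, 2) == [2, 2, 1]
assert partition_sizes(10, 3) is None   # n > 3f: some group would exceed f
assert partition_sizes(2, 1) is None    # n < 3: some group would be empty
```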
B's correctness
• Termination:
– If Bi is nonfaulty in B, then it simulates only nonfaulty processes of A (at least one).
– Those terminate, so Bi does also.
• Agreement:
– If Bi, Bj are nonfaulty processes of B, they simulate only nonfaulty processes of A.
– Agreement in A implies all these agree.
– So Bi, Bj agree.
• Validity:
– If all nonfaulty processes of B start with v, then so do all nonfaulty processes of A.
– Then validity of A implies that all nonfaulty A processes decide v, so the same holds for B.
General graphs and connectivity bounds
• n > 3f isn't the whole story:
– 4 processes, can't tolerate 1 fault: [diagram: a 4-node graph with low connectivity]
• Theorem 3: BA is solvable in an n-node graph G, tolerating f faults, if and only if both of the following hold:
– n > 3f, and
– conn(G) > 2f.
• conn(G) = minimum number of nodes whose removal results in either a disconnected graph or a 1-node graph.
• Examples: [diagrams of graphs with conn = 1, conn = 3, conn = 3]
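For small graphs, conn(G) can be computed by brute force straight from the definition (a sketch; names are our own):

```python
from itertools import combinations

def connected(nodes, edges):
    # depth-first search over the subgraph induced on `nodes`
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {u: set() for u in nodes}
    for a, b in edges:
        if a in nodes and b in nodes:
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == nodes

def conn(nodes, edges):
    # minimum k such that removing some k nodes disconnects the graph
    # or leaves a single node
    nodes = set(nodes)
    for k in range(len(nodes)):
        for removed in combinations(nodes, k):
            rest = nodes - set(removed)
            if len(rest) <= 1 or not connected(rest, edges):
                return k
    return len(nodes)

ring4 = [(1, 2), (2, 3), (3, 4), (4, 1)]
k4 = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
assert conn({1, 2, 3, 4}, ring4) == 2   # remove two opposite nodes
assert conn({1, 2, 3, 4}, k4) == 3      # complete graph: conn = n - 1
assert conn({1, 2, 3}, [(1, 2), (2, 3)]) == 1
```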
Proof: “If” direction
• Theorem 3: BA is solvable in an n-node graph G,
tolerating f faults, if and only if n > 3f and conn(G) > 2f.
• Proof (“if”):
– Suppose both hold.
– Then we can simulate a total-connectivity algorithm.
– Key is to emulate reliable communication from any node i to any
other node j.
– Rely on Menger’s Theorem, which says that a graph is c-connected
(that is, has conn ≥ c) if and only if each pair of nodes is connected
by ≥ c node-disjoint paths.
– Since conn(G) ≥ 2f + 1, we have ≥ 2f + 1 node-disjoint paths
between i and j.
– To send message, send on all these paths (assumes graph is
known).
– Majority must be correct, so take majority message.
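The majority-over-disjoint-paths idea can be sketched in a few lines (toy names of ours; each "path" is just its list of intermediate relays):

```python
from collections import Counter

def deliver(message, paths, faulty, lie):
    # a copy is corrupted if any relay on its path is faulty;
    # with 2f+1 node-disjoint paths, at most f copies are corrupted
    received = [lie if any(v in faulty for v in path) else message
                for path in paths]
    return Counter(received).most_common(1)[0][0]

f = 2
paths = [["a"], ["b"], ["c"], ["d"], ["e"]]   # 2f+1 = 5 disjoint relays
assert deliver("attack", paths, {"b", "d"}, "retreat") == "attack"
```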
Proof: "Only if" direction
• Theorem 3: BA is solvable in an n-node graph G, tolerating f faults, if and only if n > 3f and conn(G) > 2f.
• Proof ("only if"):
– We already showed n > 3f; remains to show conn(G) > 2f.
– Show key idea with simple case, conn = 2, f = 1.
– Canonical example:
• Disconnect 1 and 3 by removing 2 and 4: [diagram: 4-node graph A with nodes 1, 2, 3, 4, where removing 2 and 4 separates 1 from 3]
– Proof by contradiction.
– Assume some algorithm A that solves BA in this canonical graph, tolerating 1 failure.
Proof (conn = 2, 1 failure)
• Now construct S from two copies of A. [diagram: the 8-node ring S built from two copies of A, with processes 1, 2, 3 starting with 0 and 1′, 2′, 3′ starting with 1]
• Consider 1, 2, and 3 in S:
– Looks to them like they're in A, with a faulty process 4.
– In A, 1, 2, and 3 must decide 0.
– So they decide 0 in S also.
• Similarly, 1′, 2′, and 3′ decide 1 in S.
Proof (conn = 2, 1 failure)
• Finally, consider 3′, 4′, and 1 in S:
– Looks to them like they're in A, with a faulty process 2.
– In A, they must agree, so they also agree in S.
– But 3′ decides 1 and 1 decides 0 in S, contradiction.
• Therefore, we can't solve BA in the canonical graph, with 1 failure.
• As before, can generalize to conn(G) ≤ 2f, or use a reduction.
[diagram: the 8-node ring S next to the graph A with faulty process 2]
Byzantine processor bounds
• The bounds n > 3f and conn > 2f are fundamental for consensus-style problems with Byzantine failures.
• Same bounds hold, in synchronous settings with f Byzantine faulty processes, for:
– Byzantine Firing Squad synchronization problem
– Weak Byzantine Agreement
– Approximate agreement
• Also, in timed (partially synchronous) settings, for maintaining clock synchronization.
• Proofs used similar methods.
Weak Byzantine Agreement
[Lamport]
• Correctness conditions for BA:
– Agreement: No two nonfaulty processes decide on different values.
– Validity: If all nonfaulty processes start with the same v, then v is
the only allowable decision for nonfaulty processes.
– Termination: All nonfaulty processes eventually decide.
• Correctness conditions for Weak BA:
– Agreement: Same as for BA.
– Validity: If all processes are nonfaulty and start with the same v,
then v is the only allowed decision value.
– Termination: Same as for BA.
• Limits the situations where the decision is forced to go a
certain way.
• Similar style to validity condition for 2-Generals problem.
WBA Processor Bounds
• Theorem 4: Weak BA is solvable in an n-node
graph G, tolerating f faults, if and only if n > 3f and
conn(G) > 2f.
• Same bounds as for BA.
• Proof:
– “If”: Follows from results for ordinary BA.
– “Only if”:
• By constructions like those for ordinary BA, but slightly more
complicated.
• Show 3 vs. 1 here, rest LTTR.
Proof (3 vs. 1 Weak BA)
• By contradiction. Suppose algorithm A, consisting of processes 1, 2, 3, solves WBA with 1 possible failure.
• Let α0 = execution in which everyone starts with 0 and there are no failures; results in decision 0.
• Let α1 = execution in which everyone starts with 1 and there are no failures; results in decision 1.
• Let b = upper bound on number of rounds for all processes to decide, in both α0 and α1.
• Construct new system S from 2b copies of A: [diagram: the triangle A and a large ring S made of 2b copies of A, with blocks of 0 and 1 initial values]
Proof (3 vs. 1 Weak BA)
• Claim: Any two adjacent processes in S must
decide the same thing..
– Because it looks to them like they are in A, and they
must agree in A.
• So everyone decides the same in S.
• WLOG, all decide 1.
[Figure: the ring system S again; every process decides 1.]
Proof (3 vs. 1 Weak BA)
• Now consider a block of 2b + 1 consecutive processes that
begin with 0:
[Figure: a block of 2b + 1 consecutive processes in S, each with input 0, flanked by processes with input 1.]
• Claims:
– To all but the endpoints, the execution of S is indistinguishable from
α0, the failure-free execution in which everyone starts with 0, for 1
round.
– To all but two at each end, indistinguishable from α0 for 2 rounds.
– To all but three at each end, indistinguishable from α0 for 3 rounds.
– …
– To midpoint, indistinguishable for b rounds.
• But b rounds are enough for the midpoint to decide 0,
contradicting the fact that everyone decides 1 in S.
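The information-propagation idea behind these claims can be illustrated with a toy simulation (illustrative only — a simplified full-information model on a ring, not the lecture's exact construction; the function name is ours):

```python
def view(inputs, i, rounds):
    """Full-information view of process i on a ring after the given number
    of rounds, where each process exchanges everything it knows with both
    neighbors once per round."""
    n = len(inputs)
    views = [frozenset([(j, inputs[j])]) for j in range(n)]
    for _ in range(rounds):
        views = [views[(j - 1) % n] | views[j] | views[(j + 1) % n]
                 for j in range(n)]
    return views[i]
```

After b rounds, the midpoint of a block of 2b + 1 zeros has seen only 0-inputs, so its view is indistinguishable from the all-zeros execution — which is exactly why b rounds suffice for it to decide 0.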
Lower bound on the number of
rounds for Byzantine agreement
Lower bound on number of rounds
• Notice that f+1 rounds are used in all the
agreement algorithms we’ve seen so far---both
stopping and Byzantine.
• That’s inherent: f+1 rounds are needed in the
worst-case, even for simple stopping failures.
• Assume an f-round algorithm A tolerating f faults,
and get a contradiction.
• Restrictions on A (WLOG):
– n-node complete graph.
– Decisions at end of round f.
– V = {0,1}
– All-to-all communication at every round ≤ f.
Special case: f = 1
• Theorem 5: Suppose n ≥ 3. There is no n-process 1-fault
stopping agreement algorithm in which nonfaulty
processes always decide at the end of round 1.
• Proof: Suppose A exists.
– Construct a chain of executions, each with at most one failure, such
that:
• First has (unique) decision value 0.
• Last has decision value 1.
• Any two consecutive executions in the chain are indistinguishable to
some process i that is nonfaulty in both. So i must decide the same in
both executions, and the two must have the same decision values.
– Decision values in first and last executions must be the same.
– Contradiction.
Round lower bound, f = 1
• α0: All processes have input 0, no failures.
• …
• αk (last one): All inputs 1, no failures.
• Start the chain from α0.
• Next execution, α1, removes message 1 → 2.
– α0 and α1 indistinguishable to everyone except 1
and 2; since n ≥ 3, there is some other process.
– These processes are nonfaulty in both executions.
• Next execution, α2, removes message 1 → 3.
– α1 and α2 indistinguishable to everyone except 1
and 3, hence to some nonfaulty process.
• Next, remove message 1 → 4.
– Indistinguishable to some nonfaulty process.
[Figure: chain of executions with process 1's messages removed one at a time; all inputs 0.]
Continuing…
• Having removed all of process 1’s
messages, change 1’s input from 0 to 1.
– Looks the same to everyone else.
• We can’t just keep removing messages,
since we are allowed at most one failure in
each execution.
• So, we continue by replacing missing
messages, one at a time.
• Repeat with process 2, 3, and 4, eventually
reach the last execution: all inputs 1, no
failures.
[Figure: chain of executions in which inputs change from 0 to 1 one process at a time, ending with all inputs 1 and no failures.]
Special case: f = 2
• Theorem 6: Suppose n ≥ 4. There is no n-process 2-fault
stopping agreement algorithm in which nonfaulty
processes always decide at the end of round 2.
• Proof: Suppose A exists.
– Construct another chain of executions, each with at most 2 failures.
• This time a bit longer and more complicated.
– Start with α0: All processes have input 0, no failures, 2 rounds.
– Work toward αn: all 1's, no failures.
– Each consecutive pair is indistinguishable to some nonfaulty process.
– Use intermediate executions αi, in which:
• Processes 1,…,i have initial value 1.
• Processes i+1,…,n have initial value 0.
• No failures.
Special case: f = 2
• Show how to connect α0 and α1.
– That is, change process 1’s initial value from 0 to 1.
– Other intermediate steps essentially the same.
• Start with α0, work toward killing p1 at the beginning, to
change its initial value, by removing messages.
• Then replace the messages, working back up to α1.
• Start by removing p1’s round 2 messages, one by one.
• Q: Continue by removing p1’s round 1 messages?
• No, because consecutive executions
would not look the same to anyone:
– E.g., removing 1 → 2 at round 1 allows
p2 to tell everyone about the failure.
Special case: f = 2
• Removing 1 → 2 at round 1 allows p2 to tell all other processes about
the failure:
[Figure: two 2-round executions, with and without the round 1 message 1 → 2.]
• Distinguishable to everyone.
• So we must do something more elaborate.
• Recall that we can allow 2 processes to fail in some executions.
• Use many steps to remove a single round 1 message 1 → i; in these steps, both 1 and i will be faulty.
Removing p1’s round 1 messages
• Start with execution where p1 sends to everyone at round
1, and only p1 is faulty.
• Remove round 1 message 1 → 2:
– p2 starts out nonfaulty, so sends all its round 2 messages.
– Now make p2 faulty.
– Remove p2’s round 2 messages, one by one, until we reach an
execution where 1 → 2 at round 1, but p2 sends no round 2
messages.
– Now remove the round 1 message 1 → 2.
• Executions look the same to all but 1 and 2 (and they’re nonfaulty).
– Replace all the round 2 messages from p2, one by one, until p2 is
no longer faulty.
• Repeat to remove p1’s round 1 messages to p3, p4,…
• After removing all of p1’s round 1 messages, change p1’s
initial value from 0 to 1, as needed.
General case: Any f
• Theorem 7: Suppose n ≥ f + 2. There is no n-process f-
fault stopping agreement algorithm in which nonfaulty
processes always decide at the end of round f.
• Proof: Suppose A exists.
– Same ideas, longer chain.
– Must fail f processes in some executions in the chain, in order to
remove all the required messages, at all rounds.
– Construction in book, LTTR.
• Newer proof [Aguilera, Toueg]:
– Uses ideas from [FLP] impossibility of consensus.
– They assume strong validity, but the proof works for our weaker
validity condition also.
Lower bound on rounds,
[Aguilera, Toueg]
• Proof:
– By contradiction. Assume A solves stopping agreement for f
failures and everyone decides after exactly f rounds.
– Restrict attention to executions in which at most one process fails
during each round.
– Recall failure at a round allows process to miss sending an arbitrary
subset of the messages, or to send all but halt before changing
state.
– Consider vector of initial values as a 0-round execution.
– Defs (adapted from [Fischer, Lynch, Paterson]): α, an execution
– Defs (adapted from [Fischer, Lynch, Paterson]): α, an execution
that completes some finite number (possibly 0) of rounds, is:
• 0-valent, if 0 is the only decision that can occur in any execution (of the
kind we consider) that extends α.
• 1-valent, if 1 is the only decision that can occur in any such extension.
• Univalent, if α is either 0-valent or 1-valent (essentially decided).
• Bivalent, if both decisions occur in some extensions (undecided).
Initial bivalence
• Lemma 1: There is some 0-round execution
(vector of initial values) that is bivalent.
• Proof (adapted from [FLP]):
– Assume for contradiction that all 0-round executions are
univalent.
– 000…0 is 0-valent
– 111…1 is 1-valent
– So there must be two 0-round executions that differ in
the value of just one process, say i, such that one is 0-
valent and the other is 1-valent.
– But this is impossible, because if process i fails at the
start, no one else can distinguish the two 0-round
executions.
Bivalence through f-1 rounds
• Lemma 2: For every k, 0 ≤ k ≤ f-1, there is a bivalent k-
round execution.
• Proof: By induction on k.
– Base (k=0): Lemma 1.
– Inductive step: Assume for k, show for k+1, where k < f -1.
• Assume bivalent k-round execution α.
• Assume for contradiction that every 1-round
extension of α (with at most one new failure)
is univalent.
• Let α* be the 1-round extension of α in
which no new failures occur in round k+1.
• By assumption, this is univalent, WLOG 1-
valent.
[Figure: bivalent k-round execution α with two 1-round extensions at round k+1: α* (1-valent) and α0 (0-valent).]
• Since α is bivalent, there must be another 1-round extension of α, α0, that is 0-valent.
• In α0, some single process i fails in round k+1, by not sending to some subset of the processes, say J = {j1, j2,…jm}.
• Define a chain of (k+1)-round executions,
α0, α1, α2,…, αm.
• Each αl in this sequence is the same as α0
except that i also sends messages to j1,
j2,…jl.
– Adding in messages from i, one at a time.
• Each αl is univalent, by assumption.
• Since α0 is 0-valent, there are 2 possibilities:
– At least one of these is 1-valent, or
– All of these are 0-valent.
Case 1: At least one αl is 1-valent
• Then there must be some l such that αl-1 is 0-
valent and αl is 1-valent.
• But αl-1 and αl differ after round k+1 only in the
state of one process, jl.
• We can extend both αl-1 and αl by simply failing jl at
beginning of round k+2.
– There is actually a round k+2 because we’ve assumed k
< f-1, so k+2 ≤ f.
• And no one left alive can tell the difference!
• Contradiction for Case 1.
Case 2: Every αl is 0-valent
• Then compare:
– αm, in which i sends all its round k+1 messages and then fails, with
– α* , in which i sends all its round k+1 messages and does not fail.
• No other differences, since only i fails at round k+1 in αm.
• αm is 0-valent and α* is 1-valent.
• Extend to full f-round executions:
– αm, by allowing no further failures,
– α*, by failing i right after round k+1 and then allowing no further
failures.
• No one can tell the difference.
• Contradiction for Case 2.
• So we’ve proved:
• Lemma 2: For every k, 0 ≤ k ≤ f-1, there is a bivalent k-round execution.
And now the final round…
• Lemma 3: There is an f-round execution in which two
nonfaulty processes decide differently.
• Contradicts the problem requirements.
• Proof:
– Use Lemma 2 to get a bivalent (f-1)-round execution α
with ≤ f-1 failures.
– In every 1-round extension of α, everyone who hasn’t
failed must decide (and agree).
– Let α* be the 1-round extension of α in which no new
failures occur in round f.
– Everyone who is still alive decides after α*, and they
must decide the same thing. WLOG, say they decide 1.
– Since α is bivalent, there must be another 1-round extension of α, say α0, in which some nonfaulty process decides 0 (and hence, all decide 0).
[Figure: (f−1)-round execution α with 1-round extensions α* (decide 1) and α0 (decide 0) at round f.]
Disagreement after f rounds
• In α0, some single process i fails in round f.
• Let j, k be two nonfaulty processes.
• Define a chain of three f-round executions, α0, α1, α*,
where α1 is identical to α0 except that i sends to j in α1
(it might not in α0).
• Then α1 ~k α0.
• Since k decides 0 in α0, k also decides 0 in α1.
• Also, α1 ~j α*.
• Since j decides 1 in α*, j also decides 1 in α1.
• Yields disagreement in α1, contradiction!
• So we have proved:
• Lemma 3: There is an f-round execution in which two nonfaulty
processes decide differently.
• Which immediately yields the impossibility result.
Early-stopping agreement algorithms
• Tolerate f failures in general, but in executions with f′ < f failures, terminate faster.
• [Dolev, Reischuk, Strong 90] Stopping agreement algorithm in which all nonfaulty processes terminate in ≤ min(f′ + 2, f+1) rounds.
– If f′ + 2 ≤ f, decide “early”, within f′ + 2 rounds; in any case decide
within f+1 rounds.
[Keidar, Rajsbaum 02] Lower bound of f′ + 2 for early-
stopping agreement.
– Not just f′ + 1. Early stopping requires an extra round.
• Theorem 8: Assume 0 ≤ f′ ≤ f – 2 and f < n. Every early-
stopping agreement algorithm tolerating f failures has an
execution with f′ failures in which some nonfaulty process
doesn’t decide by the end of round f′ + 1.
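The round bounds above are simple to state in code (a hypothetical helper name, not from the cited papers):

```python
def decision_round(f, f_prime):
    """Round by which all nonfaulty processes decide in the early-stopping
    algorithm of [Dolev, Reischuk, Strong 90], tolerating f failures when
    only f_prime <= f actually occur: min(f_prime + 2, f + 1)."""
    return min(f_prime + 2, f + 1)
```

For example, with f = 5 and a single actual failure, decision comes by round 3 rather than round 6.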
Special case: f′ = 0
• Theorem 9: Assume 2 ≤ f < n. Every early-stopping agreement
algorithm tolerating f failures has a failure-free execution in which some
nonfaulty process does not decide by the end of round 1.
• Definition: Let α be an execution that completes some finite number
(possibly 0) of rounds. Then val(α) is the unique decision value in the
extension of α with no new failures.
– Different from bivalence defs---now consider value in just one extension.
• Proof:
– Again, assume executions in which at most one process fails per round.
– Identify 0-round executions with vectors of initial values.
– Assume, for contradiction, that everyone decides by round 1, in all failure-
free executions.
– val(000…0) = 0, val(111…1) = 1.
– So there must be two 0-round executions α0 and α1, that differ in the value
of just one process i, such that val(α0) = 0 and val(α1) = 1.
Special case: f′ = 0
• 0-round executions α0 and α1, differing only in the initial value of process i, such that val(α0) = 0 and val(α1) = 1.
• In the failure-free extensions of α0 and α1, all nonfaulty processes decide in just one round.
• Define:
– β0, 1-round extension of α0, in which process i fails, sends only to j.
– β1, 1-round extension of α1, in which process i fails, sends only to j.
• Then:
– β0 looks to j like the failure-free extension of α0, so j decides 0 in β0 after 1 round.
– β1 looks to j like the failure-free extension of α1, so j decides 1 in β1 after 1 round.
• β0 and β1 are indistinguishable to all processes except i, j.
• Define:
– γ0, infinite extension of β0, in which process j fails right after round 1.
– γ1, infinite extension of β1, in which process j fails right after round 1.
• By agreement, all nonfaulty processes must decide 0 in γ0, 1 in γ1.
• But γ0 and γ1 are indistinguishable to all nonfaulty processes, so they
can’t decide differently, contradiction.
Next time…
• Other kinds of consensus problems:
– k-agreement
– Approximate agreement (skim)
– Distributed commit
• Reading: Chapter 7
MIT OpenCourseWare
http://ocw.mit.edu
6.852J / 18.437J Distributed Algorithms
Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/0d1d84b9fd1e74bf8e2f8ad916cfca0d_MIT6_852JF09_lec05.pdf |
2/25/16
2.341J Lecture 6: Extensional Rheometry:
From Entangled Melts to Dilute Solutions
Professor Gareth H. McKinley
Dept. of Mechanical Engineering, M.I.T.
Cambridge, MA 01239
http://web.mit.edu/nnf
The Role of Fluid Rheology
• “Slimy”
• “Sticky”
• Other manifestations: ‘stringy’, ‘tacky’, ‘stranding’, ‘ropiness’, ‘pituity’, ‘long’ vs. ‘short’ texture...
Kinematics of Deformation
• As we have seen, there are three major classes of extensional flow:
Simple Shear: vx = γ̇ y
Shear-Free Flow (fiber-spinning, thermoforming, calendering/rolling):
vx = −(1/2) ε̇0 (1 + b) x
vy = −(1/2) ε̇0 (1 − b) y
vz = ε̇0 z
b = flow type parameter
Bird, Armstrong & Hassager, (1987)
© John Wiley and Sons. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Fred Trouton (one paper on Rheology; 110 years ago)
doi:10.1016/S0377-0257(06)00214-X
This image is in the public domain.
© Royal Society. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Transient Extensional Rheometry
McKinley, Ann. Rheol. Reviews 2005
• The extensional viscosity is best considered as a transient material function: follow evolution from equilibrium to steady state (or break-up!)
[Figure: extensional rheometers mapped against zero-shear-rate viscosity [Pa.s] — Meissner apparatus (RME) and SER fixture for melts (~10^2–10^6 Pa.s); filament stretching rheometers (FISER) and capillary thinning & breakup rheometers at intermediate viscosities; opposed jet devices and microfluidic contraction flows for dilute solutions (~10^-3–10^0 Pa.s).]
Sentmanat, Wang & McKinley, J. Rheol. May 2005
McKinley & Sridhar, Ann. Reviews of Fluid Mech, 2002
Pipe & McKinley, Rheol. Acta (AERC 2007)
Importance of Extensional Rheology
• Dominates many processing operations
• Effects are frequently transient in nature…
[Figure: panels (a)–(f) showing transient extensional phenomena; scale bars 6.35 mm, 2 cm, 20 μm.]
© The British Society of Rheology. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
© Air Products and Chemicals, Inc. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Nonlinear Extensional Viscosity
• Extensional flows are “strong flows” which result in extensive molecular
deformation, microstructural alignment and high tensile stresses
– Applications: fiber-spinning, blow-molding, sheet-molding, extrusion, coating
Strain Hardening ⇔ Tension-Thickening
[Figure: transient extensional viscosity vs. ε̇0 t for a solution and a melt — ‘coil’ to ‘stretch’ transition relative to the Newtonian baseline; flow type b = 0; response governed by λ ε̇0.]
ηE+(ε̇0, t) = [τzz(t) − τrr(t)] / ε̇0   (transient response)
lim t→∞ ηE+(ε̇0, t) → ηE(ε̇0)   (steady-state response?)
Trouton (1906): ηE = 3μ
Molecular alignment increases with Hencky strain, and increases at higher strain rates:
ε(t) = ∫_{−∞}^{t} ε̇(t′) dt′ = ln( L(t) / L0 )
Deborah Number: De = λ ε̇0
© source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
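The Hencky strain and Deborah number definitions above translate directly to code (a minimal sketch; the function names are ours):

```python
import math

def hencky_strain(L, L0):
    """Hencky (true) strain: eps = ln(L(t) / L0)."""
    return math.log(L / L0)

def deborah_number(relaxation_time, strain_rate):
    """De = lambda * eps_dot_0."""
    return relaxation_time * strain_rate

# Stretching a sample to e times its initial length gives one unit of
# Hencky strain.
```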
Tensile Stress Surface
ηE(ε̇0, t) ⇔ Tr(De, ε)
• The evolution of the extensional viscosity can be visualized as a function of time and strain rate.
[Figure: surface of Trouton ratio Tr(De, t) = ηE+ / η0 vs. time t [s] and strain rate — increasing viscosity (strain hardening); stretching of branched species for Des > 1, where Des = ε̇ × τs1; constant velocity stretching; necking; time of experiment.]
Polymer Melts
Sentmanat, Rheol. Acta (2004)
• SER Universal Testing Platform: specifically designed
so that it can be easily accommodated onto a number
of commercially available torsional rheometers
• TA Instruments version: EVF = Extensional Viscosity Fixture
• Can be housed within the host system’s environmental chamber for controlled temperature experiments.
– Requires only 5-200 mg of material
– Can be used up to temperatures of 250°C
– Easily detachable for fixture changeover/clean-up
Validation Experiments: LDPE (BASF Lupolen® 1840H)
(Sentmanat, Wang & McKinley; JoR Mar/Apr 2005)
– Mn = 17,000; Mw = 243,000; Mw/Mn = 14.3
– CH3/1000C = 23
– Very similar to the IUPAC A reference material
– Same polymer as that used by Münstedt et al., Rheol. Acta 37, 21-29 (1998): the ‘Münstedt rheometer’ (end separation method)
SER Principle of Operation
• “Constant Sample Length” Extensional Rheometer
• Ends of sample affixed to windup drums, such that for a constant drum rotation Ω:
ε̇0 = 2ΩR / L
• Resulting torque on transducer (attached to housing):
T = 2(F + Ffriction)R
SER Fixture with ARES Rheometer
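The SER kinematics above reduce to two one-liners (a sketch; the function and parameter names are ours):

```python
def ser_strain_rate(Omega, R, L):
    """Imposed Hencky strain rate on the SER fixture: eps_dot0 = 2*Omega*R/L,
    for drum rotation rate Omega [rad/s], drum radius R, and sample length L."""
    return 2.0 * Omega * R / L

def ser_torque(F, F_friction, R):
    """Torque on the transducer: T = 2*(F + F_friction)*R."""
    return 2.0 * (F + F_friction) * R
```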
Comparison of LDPE Stress Growth Curves(cid:1)
• Good agreement with LVE response at short times (t ≥ 0.01 s): ηE+ = 3η+(t)
• Increasing strain-hardening and sample rupture at high rates
• The results with the SER (red curves) show excellent agreement
with literature results from Münstedt et al. (black symbols & lines)
The Role of Chain-Branching(cid:1)
• Extensional stress growth is a
strong function of the level of
molecular branching.
• Branch points act as ‘crosslinks’
that efficiently elongate chains
and transmit stress…
(cid:1) Provided they are long enough to
be entangled
H.M. Laun, Int. Cong. Rheol. 1980(cid:2)
© Springer. All rights reserved. This content is excluded
from our Creative Commons license. For more information,
see https://ocw.mit.edu/help/faq-fair-use/.
Results for Simple Fluids
• Newtonian Fluids (McKinley & Tripathi, J. Rheol., 2000):
Rmid/R0 = 0.0709 (σ / (ηs R0)) (tc − t)
• Ideal Elastic Fluids (Entov & Hinch, JNNFM, 1997):
Rmid/R0 = (G R0 / σ)^(1/3) exp( −t / (3λ1) )
[Figure: capillary thinning of a polystyrene oil filament measured with a laser micrometer; 2R0 = 6 mm; SM-1 fluid (500 ppm, 2×10^6 g/mol); tcap = η0 R0 / σ ~ 8 sec.]
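The two filament-thinning laws above — linear decay for a Newtonian fluid, exponential elasto-capillary decay for an ideal elastic fluid — can be evaluated as follows (a sketch; parameter names are ours, and in the Newtonian law R0 cancels from the formula for Rmid):

```python
import math

def newtonian_radius(t, tc, sigma, eta_s):
    """Midpoint radius of a thinning Newtonian filament
    (McKinley & Tripathi 2000): Rmid = 0.0709 * (sigma / eta_s) * (tc - t),
    reaching zero at the critical breakup time tc."""
    return 0.0709 * (sigma / eta_s) * (tc - t)

def elastic_radius(t, G, sigma, R0, lambda1):
    """Elasto-capillary thinning (Entov & Hinch 1997):
    Rmid = R0 * (G * R0 / sigma)**(1/3) * exp(-t / (3 * lambda1))."""
    return R0 * (G * R0 / sigma) ** (1.0 / 3.0) * math.exp(-t / (3.0 * lambda1))
```

The elastic law decays by a factor of 1/e every 3λ1, which is how the relaxation time is read off a thinning experiment.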
Dilute Polymer Solutions
• Increasing molecular weight delays elasto-capillary breakup
• Measured time-scale agrees quantitatively with Zimm relaxation time
[Figure: normalized diameter D/D0 vs. scaled time t/(η0R0/σ) for the SM-1, SM-2, and SM-3 fluids with Oldroyd-B fits; slope −1/(3λZimm); insets show relaxation time λz [sec] and viscosity η [Pa.s] vs. Mw [g/mol].]
Finite Extensibility
• Important in many biological processes (saliva, spinning of spider silk)
© The British Society of Rheology. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Flow Through Porous Media
• Solutions of flexible polymers and self-assembling wormlike surfactants are commonly used in enhanced oil recovery (EOR) and reservoir fracturing operations.
Kefi et al. (2004) Oilfield Review
© Schlumberger. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
[Figure: dimensionless pressure drop vs. flow rate for flow through porous media.]
Müller & Sanz in Kausch & Nguyen, 1997
Extensional Viscosity of Wormlike Surfactants
• Solutions of EHAC (Schlumberger VES “clearfrac”) in brine
Dmid(t) ⇒ ε̇(t) = −(2 / Dmid) dDmid/dt ⇒ ηE,apparent(t)
Yesilata, Clasen & McKinley, JNNFM 133(2) 2006
Kim, Pipe, McKinley; Kor-Aust Rheol. J, 2010
CC by-nc-nd. Some rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
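The midpoint-diameter processing above, ε̇ = −(2/Dmid) dDmid/dt, is a short numerical exercise (a sketch assuming numpy; names are ours):

```python
import numpy as np

def apparent_extension_rate(t, D_mid):
    """Apparent extension rate from the measured midpoint diameter:
    eps_dot(t) = -(2 / D_mid) * dD_mid/dt, via a finite-difference gradient."""
    dD_dt = np.gradient(D_mid, t)
    return -2.0 * dD_dt / D_mid

# For an exponentially thinning filament D = D0*exp(-t/(3*lambda)), this
# recovers the constant rate eps_dot = 2/(3*lambda).
```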
Drag Reduction and Jet Breakup
• Extensional effects from polymer additives can dramatically reduce the extent of turbulent dissipation in high Reynolds number flows
• Applications include:
– Wake reduction: sailing, submarines, high-speed swimming (dolphins)
– Flow-rate enhancement: storm drains, firehoses...
Union-Carbide “Rapid Water”!!
From W.R. Schowalter “Non-Newtonian Fluid Mechanics” (Pergamon) 1978
© Pergamon Press. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Commercial Interest in Jet (?) Breakup
New York Times, Dec 4th, 1984
“..if a few pieces of spaghetti are withdrawn gently there is little resistance. But if you jerk it out, it pulls on more than you have hold of. The resistance is much higher.” (strain-hardening)
© The New York Times Company. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Break Up of Low Viscosity Liquids
• 0.1 wt% PEO (2×10^6 g/mol) in water; η0 ≈ 1.10×10^−3 Pa.s
– Standard test fluid
– Plate diameter R0 = 3 mm; aspect ratio = 2.3
Rayleigh Time Scale: tR ≈ sqrt( ρ R0^3 / σ ) ≈ 0.020 s
Rodd, Cooper-White, McKinley; Applied Rheol. 15(1), 12-27; 2005
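The Rayleigh time estimate is a one-line check (a sketch; the surface tension value is our assumption, ~62 mN/m for aqueous PEO):

```python
import math

def rayleigh_time(rho, R0, sigma):
    """Inertio-capillary (Rayleigh) time scale: t_R = sqrt(rho * R0**3 / sigma)."""
    return math.sqrt(rho * R0 ** 3 / sigma)

# Water-like density (1000 kg/m^3), R0 = 3 mm, sigma ~ 0.062 N/m gives
# ~0.021 s, consistent with the slide's ~0.020 s.
```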
Low Viscosity Fluids
• Critical to Inkjet Printing
– 100-1000 drops per second
– Drop volume 2-10 picoliter
– Eliminate formation of spray and “satellite droplets”
• Identical viscosities and surface tensions
– One contains Poly(ethylene oxide) (PEO), Mw = 1,000,000 g/mol, c = 100 ppm
• Elastic effects vs. inertio-capillary effects: De
[Figure: jetting comparison; panel labels garbled in extraction (a shear-viscosity model fluid vs. an extensional-viscosity model fluid).]
The limit of infinite dilution: single molecule experiments
• Label DNA as a model “supersized” single flexible molecule
• The Cross Slot Apparatus: a free-standing stagnation point: Perkins, Smith, Chu, Science 1997
• De Gennes (Perspective): “molecular individualism”
© AAAS. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Polymer Additives and Extensional Rheology
• Large axial stresses (“Streamline tension”)
• Transient extensional stretching of chains and tensile stress growth...
∇v = (ε̇0/2) diag(−1, −1, +2)
[Figure: Trouton ratio vs. Hencky strain for the SM-1 fluid — interlaboratory comparison (M.I.T. De = 17, Monash De = 14, Toronto De = 12) against FENE-PM theory; plateau of O(L^2) ~ Mw; Newtonian limit Tr = 3; De = λ1 ε̇0.]
Anna et al. J. Rheol. 2001
© The Society of Rheology, Inc. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
[Figure: (extensional viscosity)/(shear viscosity) ratio grows with increasing flow strength.]
Smith, Chu, Larson, Science 1997
© AAAS. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
Doyle, Shaqfeh, Spiegelberg, McKinley, JNNFM 1997
Summary
• A number of well-characterized instruments now exist for performing measurements of transient extensional rheometry for fluids spanning the range from dilute solution to the melt
– ‘constant volume’ devices: e.g. FISER, CABER, Münstedt Rheometer
– ‘constant length’ devices: e.g. EVF, SER, RME = Meissner Rheometer
• Understanding the kinematics imposed by the instrument and the dynamics of filament evolution is essential in order to extract the true material functions
• Challenges still remain:
– Theory for filament deformation and rupture at very high strains
– Measurements for ‘weakly elastic’, “non-spinnable” materials
– Understanding and exploiting extensional viscosity on the microscale:
N. Kojic et al.(cid:2)
L. Mahadevan, Harvard(cid:2)
R. Cohn, U. Louisville(cid:2)
© The Company of Biologists Ltd. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
© source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
© source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
MIT OpenCourseWare
https://ocw.mit.edu
2.341J / 10.531J Macromolecular Hydrodynamics
Spring 2016
For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/2-341j-macromolecular-hydrodynamics-spring-2016/0d326f50e4b153281740a7d4b933e5ea_MIT2_341JS16_Lec06-slides.pdf |
MIT OpenCourseWare
http://ocw.mit.edu
6.854J / 18.415J Advanced Algorithms
Fall 2008
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
18.415/6.854 Advanced Algorithms
September 17, 2008
Lecturer: Michel X. Goemans
Lecture 5
Today, we continue the discussion of the minimum cost circulation problem. We first review the
Goldberg-Tarjan algorithm, and improve it by allowing more flexibility in the selection of cycles.
This gives the Cancel-and-Tighten algorithm. We also introduce splay trees, a data structure which
we will use to create another data structure, dynamic trees, that will further improve the running
time of the algorithm.
1 Review of the Goldberg-Tarjan Algorithm
Recall the algorithm of Goldberg and Tarjan for solving the minimum cost circulation problem:
1. Initialize the flow with f = 0.
2. Repeatedly push flow along the minimum mean cost cycle Γ in the residual graph Gf , until
no negative cycles exist.
We used the notation

    µ(f) = min_{cycle Γ ⊆ E_f} c(Γ)/|Γ|
to denote the minimum mean cost of a cycle in the residual graph Gf . In each iteration of the
algorithm, we push as much flow as possible along the minimum mean cost cycle, until µ(f ) ≥ 0.
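The lecture treats the minimum mean cost cycle computation as a black box. As an illustrative sketch (not part of the notes), Karp's classical algorithm computes this quantity in O(mn) time; the Python below assumes the graph is handed over as an explicit edge list rather than as a residual graph:

```python
def min_mean_cycle(n, edges):
    """Karp's algorithm for the minimum mean cost of a cycle.

    n: number of vertices (0..n-1); edges: list of (u, v, cost).
    Returns the minimum mean cycle cost, or None if the graph is acyclic.
    """
    INF = float("inf")
    # d[k][v] = minimum cost of a walk with exactly k edges ending at v,
    # starting anywhere (equivalent to adding a 0-cost super-source).
    d = [[INF] * n for _ in range(n + 1)]
    for v in range(n):
        d[0][v] = 0.0
    for k in range(1, n + 1):
        for u, v, c in edges:
            if d[k - 1][u] + c < d[k][v]:
                d[k][v] = d[k - 1][u] + c
    best = None
    for v in range(n):
        if d[n][v] == INF:
            continue
        # Karp: mu = min over v of max over k of (d_n(v) - d_k(v)) / (n - k)
        worst = max((d[n][v] - d[k][v]) / (n - k)
                    for k in range(n) if d[k][v] < INF)
        best = worst if best is None else min(best, worst)
    return best
```

For example, on a graph whose cheapest cycle has three unit-cost edges, the routine returns a mean of 1.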
We used ε(f) to denote the minimum ε such that f is ε-optimal. In other words,

    ε(f) = min{ε : ∃ potential p : V → R such that c_p(v, w) ≥ −ε for all edges (v, w) ∈ E_f}.

We proved that for all circulations f,

    ε(f) = −µ(f).
A consequence of this equality is that there exists a potential p such that any minimum mean cost
cycle Γ satisfies c_p(v, w) = −ε(f) = µ(f) for all (v, w) ∈ Γ, since the cost of each edge is bounded
below by the mean cost of the cycle.
1.1 Analysis of Goldberg-Tarjan
Let us recall the analysis of the above algorithm. This will help us to improve the algorithm in order
to achieve a better running time. Please refer to the previous lecture for the details of the analysis.
We used ε(f) as an indication of how close we are to the optimal solution. We showed that ε(f)
is a non-increasing quantity; that is, if f′ is obtained from f after a single iteration, then ε(f′) ≤ ε(f).
It remains to show that ε(f) decreases “significantly” after several iterations.
Lemma 1 Let f be any circulation, and f′ be the circulation obtained after m iterations of the
Goldberg-Tarjan algorithm. Then

    ε(f′) ≤ (1 − 1/n) ε(f).

We showed that if the costs are all integer valued, then we are done as soon as we reach ε(f) < 1/n.
Using these two facts, we showed that the number of iterations of the above algorithm is at most
O(mn log(nC)). An alternative analysis using ε-fixed edges provides a strongly polynomial bound
of O(m²n log n) iterations. Finally, the running time of a single iteration is O(mn) using a variant
of Bellman-Ford (see problem set).
1.2 Towards a faster algorithm
In the above algorithm, a significant amount of time is used to compute the minimum cost cycle.
This is unnecessary, as our goal is simply to cancel enough edges in order to achieve a “significant”
improvement in � once every several iterations.
We can improve the algorithm by using a more flexible selection of cycles to cancel. The idea of
the Cancel-and-Tighten algorithm is to push flows along cycles consisting entirely of negative cost
edges. For a given potential p, we push as much flow as possible along cycles of this form, until no
more such cycles exist, at which point we update p and repeat.
2 Cancel-and-Tighten
2.1 Description of the Algorithm
Definition 1 An edge is admissible with respect to a potential p if cp(v, w) < 0. A cycle Γ is
admissible if all the edges of Γ are admissible.
Cancel and Tighten Algorithm (Goldberg and Tarjan):
1. Initialization: f ← 0, p ← 0, ε ← max_{(v,w)∈E} c(v, w), so that f is ε-optimal with respect to p.
2. While f is not optimum, i.e., Gf contains a negative cost cycle, do:
(a) Cancel: While G_f contains a cycle Γ which is admissible with respect to p, push as much
flow as possible along Γ.

(b) Tighten: Update p to p′ and ε to ε′, where p′ and ε′ are chosen such that c_{p′}(v, w) ≥ −ε′
for all edges (v, w) ∈ E_f and ε′ ≤ (1 − 1/n) ε.

Remark 1 We do not update the potential p every time we push a flow. The potential p gets updated
in the Tighten step after possibly several flows are pushed through in the Cancel step.
Remark 2 In the Tighten step, we do not need to find p′ and ε′ such that ε′ is as small as possible;
it is only necessary to decrease ε by a factor of at least 1 − 1/n. However, in practice, one tries to
decrease ε by a smaller factor (i.e., to shrink it further) in order to obtain a better running time.

Why is it always possible to obtain an improvement factor of 1 − 1/n in each iteration? This is
guaranteed by the following result, whose proof is similar to the proof used in the analysis during
the previous lecture.
Lemma 2 Let f be a circulation and f′ be the circulation obtained by performing the Cancel step.
Then we cancel at most m cycles, and

    ε(f′) ≤ (1 − 1/n) ε(f).
Proof: Since we only cancel admissible edges, after any cycle is canceled in the Cancel step:
• All new edges in the residual graph are non-admissible, since the edge costs are skew-symmetric;
• At least one admissible edge is removed from the residual graph, since we push the maximum
possible amount of flow through the cycle.
Since we begin with at most m admissible edges, we cannot cancel more than m cycles, as each cycle
canceling reduces the number of admissible edges by at least one.
After the Cancel step, every cycle Γ contains at least one non-admissible edge, say (u₁, v₁) ∈ Γ
with c_p(u₁, v₁) ≥ 0. Then the mean cost of Γ is

    c(Γ)/|Γ| = (1/|Γ|) Σ_{(u,v)∈Γ} c_p(u, v) ≥ −((|Γ| − 1)/|Γ|) ε(f) = −(1 − 1/|Γ|) ε(f) ≥ −(1 − 1/n) ε(f).

Therefore, ε(f′) = −µ(f′) ≤ (1 − 1/n) ε(f). □
2.2 Implementation and Analysis of Running Time

2.2.1 Tighten Step

We first discuss the Tighten step of the Cancel-and-Tighten algorithm. In this step, we wish to find
a new potential function p′ and a constant ε′ such that c_{p′}(v, w) ≥ −ε′ for all edges (v, w) ∈ E_f
and ε′ ≤ (1 − 1/n) ε. We can find the smallest possible ε′ in O(mn) time by using a variant of the
Bellman-Ford algorithm. However, since we do not actually need to find the best possible ε′, it is
possible to vastly reduce the running time of the Tighten step to O(n), as follows.
When the Cancel step terminates, there are no cycles in the admissible graph G_a = (V, A), the
subgraph of the residual graph with only the admissible edges. This implies that there exists a
topological sort of the admissible graph. Recall that a topological sort of a directed acyclic graph
is a linear ordering l : V → {1, . . . , n} of its vertices such that l(v) < l(w) if (v, w) is an edge of the
graph; it can be achieved in O(m) time using a standard topological sort algorithm (see, e.g., CLRS
page 550). This linear ordering enables us to define a new potential function p′ by the equation
p′(v) = p(v) − l(v)ε/n. We claim that this potential function satisfies our desired properties.
Claim 3 The new potential function p′(v) = p(v) − l(v)ε/n satisfies the property that f is ε′-optimal
with respect to p′ for some constant ε′ ≤ (1 − 1/n)ε.
Proof: Let (v, w) ∈ E_f . Then

    c_{p′}(v, w) = c(v, w) + p′(v) − p′(w)
                 = c(v, w) + p(v) − l(v)ε/n − p(w) + l(w)ε/n
                 = c_p(v, w) + (l(w) − l(v))ε/n.

We consider two cases, depending on whether or not l(v) < l(w).

Case 1: l(v) < l(w). Then

    c_{p′}(v, w) = c_p(v, w) + (l(w) − l(v))ε/n ≥ −ε + ε/n = −(1 − 1/n)ε.

Case 2: l(v) > l(w), so that (v, w) is not an admissible edge. Then

    c_{p′}(v, w) = c_p(v, w) + (l(w) − l(v))ε/n ≥ 0 − (n − 1)ε/n = −(1 − 1/n)ε.

In either case, we see that f is ε′-optimal with respect to p′, where ε′ ≤ (1 − 1/n)ε. □
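Claim 3 translates directly into code. The sketch below (illustrative Python, not from the notes; the dict-based graph representation is an assumption) computes the topological order with Kahn's algorithm and shifts the potentials:

```python
from collections import deque

def tighten(admissible, p, eps):
    """One Tighten step: shift each potential by -l(v)*eps/n, where l is a
    topological order of the admissible graph (which must be acyclic, as it
    is after the Cancel step), so that eps' <= (1 - 1/n) * eps.

    admissible: dict vertex -> list of out-neighbours (admissible edges only)
    p: dict vertex -> current potential; eps: current epsilon
    """
    n = len(p)
    indeg = {v: 0 for v in p}
    for u in admissible:
        for v in admissible[u]:
            indeg[v] += 1
    q = deque(v for v in p if indeg[v] == 0)
    l, rank = {}, 1
    while q:                              # Kahn's algorithm, O(m + n)
        u = q.popleft()
        l[u], rank = rank, rank + 1
        for v in admissible.get(u, ()):
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    new_p = {v: p[v] - l[v] * eps / n for v in p}
    return new_p, (1 - 1.0 / n) * eps
```

On a three-vertex path a → b → c with all potentials 0 and ε = 1, an admissible edge of reduced cost −1 ends up with reduced cost exactly −(1 − 1/n)ε, matching Case 1 of the proof.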
2.2.2 Cancel Step
We now shift our attention to the implementation and analysis of the Cancel step. Naïvely, it takes
O(m) time to find a cycle in the admissible graph Ga = (V, A) (e.g., using Depth-First Search) and
push flow along it. Using a more careful implementation of the Cancel step, we shall show that each
cycle in the admissible graph can be found in an “amortized” time of O(n).
We use a Depth-First Search (DFS) approach, pushing as much flow as possible along an
admissible cycle and removing saturated edges, as well as removing edges from the admissible graph
whenever we determine that they are not part of any cycle. Our algorithm is as follows:
Cancel(Ga = (V, A)): Choose an arbitrary vertex u ∈ V , and begin a DFS rooted at u.
1. If we reach a vertex v that has no outgoing edges, then we backtrack, deleting from A the
edges that we backtrack along, until we find an ancestor r of v for which there is another child
to explore. (Notice that every edge we backtrack along cannot be part of any cycle.) Continue
the DFS by exploring paths outgoing from r.
2. If we find a cycle Γ, then we push the maximum possible flow through it. This causes at
least one edge along Γ to be saturated. We remove the saturated edges from A, and start
the depth-first search from scratch using G′_a = (V, A′), where A′ denotes A with the saturated
edges removed.
Every edge that is not part of any cycle is visited at most twice (since it is removed from the
admissible graph the second time), so the time taken to remove edges that are not part of any cycle
is O(m). Since there are n vertices in the graph, it takes O(n) time to find a cycle (excluding the
time taken to traverse edges that are not part of any cycle), determine the maximum flow that
we can push through it, and update the flow in each of its edges. Since at least one edge of A is
saturated and removed every time we find a cycle, it follows that we find at most m cycles. Hence,
the total running time of the Cancel step is O(m + mn) = O(mn).
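A minimal Python sketch of this edge-deleting DFS (illustrative, not from the notes): it returns one admissible cycle at a time and permanently deletes edges that lead nowhere, which is exactly the invariant the amortized analysis relies on. Pushing flow and removing saturated edges would then happen outside this routine.

```python
def find_admissible_cycle(adj):
    """Locate one cycle in a directed graph (the admissible graph).
    Edges we backtrack along cannot be on any cycle and are deleted.
    adj: dict vertex -> list of out-neighbours (mutated in place).
    Returns the cycle as a vertex list, or None if the graph is acyclic."""
    on_path, path = set(), []

    def dfs(u):
        on_path.add(u)
        path.append(u)
        while adj.get(u):
            v = adj[u][-1]
            if v in on_path:                 # back edge: cycle found
                return path[path.index(v):]
            cyc = dfs(v)
            if cyc:
                return cyc
            adj[u].pop()                     # (u, v) leads nowhere: delete it
        on_path.discard(u)
        path.pop()
        return None

    for s in list(adj):
        cyc = dfs(s)
        if cyc:
            return cyc
    return None
```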
2.2.3 Overall Running Time
From the above analysis, we see that the Cancel step requires O(mn) time per iteration, whereas
the Tighten step only requires O(m) time per iteration. In the previous lecture, we determined
that the Cancel-and-Tighten algorithm requires O(min(n log(nC), mn log n)) iterations. Hence the
overall running time is O(min(mn2 log(nC), m2n2 log n)).
Over the course of the next few lectures, we will develop data structures that will enable us to
reduce the running time of a single Cancel step from O(mn) to O(m log n). Using dynamic trees, we
can reduce the running time of the Cancel step to an amortized time of O(log n) per cycle canceled.
This will reduce the overall running time to O(min(mn log(nC) log n, m2n log2 n)).
3 Binary Search Trees
In this section, we review some of the basic properties of binary search trees and the operations
they support, before introducing splay trees. A Binary Search Tree (BST) is a data structure that
maintains a dictionary. It stores a collection of objects with ordered keys. For an object (or node)
x, we use key[x] to denote the key of x.
Property of a BST. The following invariant must always be satisfied in a BST:
• If y lies in the left subtree of x, then key[y] ≤ key[x]
• If z lies in the right subtree of x, then key[z] ≥ key[x]
Operations on a BST. Here are some operations typically supported by a BST:
• Find(k): Determines whether the BST contains an object x with key[x] = k; if so, returns the
object, and if not, returns false.
• Insert(x): Inserts a new node x into the tree.
• Delete(x): Deletes x from the tree.
• Min: Finds the node with the minimum key from the tree.
• Max: Finds the node with the maximum key from the tree.
• Successor(x): Find the node with the smallest key greater than key[x].
• Predecessor(x): Find the node with the greatest key less than key[x].
• Split(x): Returns two BSTs: one containing all the nodes y where key[y] < key[x], and the
other containing all the nodes z where key[z] ≥ key[x].
• Join(T1, x, T2): Given two BSTs T1 and T2, where all the keys | https://ocw.mit.edu/courses/6-854j-advanced-algorithms-fall-2008/0d3338683064d96b5174095829043b93_lec5.pdf |
1, x, T2): Given two BSTs T1 and T2, where all the keys in T1 are at most key[x], and
all the keys in T2 are at least key[x], returns a BST containing T1, x and T2.
For example, the procedure Find(k) can be implemented by traversing through the tree, and
branching to the left (resp. right) if the current node has key greater than (resp. less than) k. The
running time for many of these operations is linear in the height of the tree, which can be as high
as O(n) in the worst case, where n is the number of nodes in the tree.
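As a concrete sketch of these root-to-leaf descents (illustrative Python, not from the notes), Find and Insert each cost O(height):

```python
class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert(key): descend, branching by the BST invariant."""
    if root is None:
        return BSTNode(key)
    if key <= root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def find(root, key):
    """Find(key): O(height) descent; returns the node or None."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root
```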
A balanced BST is a BST whose height is maintained at O(log n), so that the above operations
can be run in O(log n) time. Examples of BSTs include Red-Black trees, AVL trees, and B-trees.
In the next lecture, we will discuss a data structure called splay trees, which is a self-balancing
BST with amortized cost of O(log n) per operation. The idea is that every time a node is accessed,
it gets pushed up to the root of the tree.
The basic operations of a splay tree are rotations. They are illustrated in the following diagram.
[Figure: zig (right rotation) and zag (left rotation) about nodes x and y, with subtrees A, B, C.]
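The two rotations in the figure can be sketched as follows (illustrative Python, not from the notes); each preserves the in-order sequence A, x, B, y, C:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def zig(y):
    """Right rotation: promote x = y.left one level; subtree B = x.right
    moves across to become the new y.left."""
    x = y.left
    y.left, x.right = x.right, y
    return x

def zag(x):
    """Left rotation: the mirror image (inverse) of zig."""
    y = x.right
    x.right, y.left = y.left, x
    return y
```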
Lecture 04
Generalization error of SVM.
18.465
Assume we have samples z_1 = (x_1, y_1), . . . , z_n = (x_n, y_n) as well as a new sample z_{n+1}. The classifier trained
on the data z_1, . . . , z_n is f_{z_1,...,z_n}.

The error of this classifier is

    Error(z_1, . . . , z_n) = E_{z_{n+1}} I(f_{z_1,...,z_n}(x_{n+1}) ≠ y_{n+1}) = P_{z_{n+1}}(f_{z_1,...,z_n}(x_{n+1}) ≠ y_{n+1})

and the Average Generalization Error is

    A.G.E. = E Error(z_1, . . . , z_n) = E E_{z_{n+1}} I(f_{z_1,...,z_n}(x_{n+1}) ≠ y_{n+1}).

Since z_1, . . . , z_n, z_{n+1} are i.i.d., in expectation, training on z_1, . . . , z_i, . . . , z_n and evaluating on z_{n+1} is the
same as training on z_1, . . . , z_{n+1}, . . . , z_n (with z_i and z_{n+1} exchanged) and evaluating on z_i. Hence, for any i,
    A.G.E. = E E_{z_i} I(f_{z_1,...,z_{n+1},...,z_n}(x_i) ≠ y_i)

and

    A.G.E. = E [ (1/(n+1)) Σ_{i=1}^{n+1} I(f_{z_1,...,z_{n+1},...,z_n}(x_i) ≠ y_i) ],

where the average inside the expectation is precisely the leave-one-out error.
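This exchangeability argument can be checked numerically. The sketch below (illustrative Python; the nearest-centroid learner and the Gaussian data model are assumptions, chosen only because any learner that treats its training samples symmetrically works) estimates both sides by Monte Carlo:

```python
import random

random.seed(0)

def draw(n):
    """i.i.d. samples: y uniform on {-1, +1}, x ~ N(y, 1)."""
    out = []
    for _ in range(n):
        y = random.choice([-1, +1])
        out.append((random.gauss(y, 1.0), y))
    return out

def train(sample):
    """Nearest-centroid classifier: a stand-in symmetric learner."""
    pos = [x for x, y in sample if y == +1]
    neg = [x for x, y in sample if y == -1]
    cp = sum(pos) / len(pos) if pos else 0.0
    cn = sum(neg) / len(neg) if neg else 0.0
    return lambda x: +1 if abs(x - cp) < abs(x - cn) else -1

n, trials = 20, 2000
age = 0.0  # Monte Carlo estimate of E Error(z_1, ..., z_n)
loo = 0.0  # Monte Carlo estimate of the expected leave-one-out error
for _ in range(trials):
    z = draw(n + 1)
    f = train(z[:n])
    x, y = z[n]
    age += (f(x) != y)
    errs = sum(train(z[:i] + z[i + 1:])(z[i][0]) != z[i][1]
               for i in range(n + 1))
    loo += errs / (n + 1)
age /= trials
loo /= trials
# The two estimates agree up to Monte Carlo error.
```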
Therefore, to obtain a bound on the generalization ability of an algorithm, it’s enough to obtain a bound
on its leave-one-out error. We now prove such a bound for SVMs. Recall that the solution of SVM is

    ϕ = Σ_{i=1}^{n+1} α^0_i y_i x_i.

Theorem 4.1.

    L.O.O.E. ≤ min(# support vect., D²/m²) / (n + 1)

where D is the diameter of a ball containing all x_i, i ≤ n + 1, and m is the margin of an optimal hyperplane.
Remarks:

• dependence on sample size is 1/n
• dependence on margin is 1/m²
• number of support vectors (sparse solution)

[Figure: a maximum-margin hyperplane separating + and − points, with margin m.]
Lemma 4.1. If x_i is a support vector and it is misclassified by leaving it out, then α^0_i ≥ 1/D².
Given Lemma 4.1, we prove Theorem 4.1 as follows.

Proof. Clearly,

    L.O.O.E. ≤ (# support vect.) / (n + 1).

Indeed, if x_i is not a support vector, then removing it does not affect the solution. Using Lemma 4.1 above,

    Σ_{i ∈ supp. vect.} I(x_i is misclassified) ≤ Σ_{i ∈ supp. vect.} α^0_i D² = D²/m².

In the last step we use the fact that Σ α^0_i = 1/m². Indeed, since |ϕ| = 1/m,

    1/m² = |ϕ|² = ϕ · ϕ = ϕ · Σ α^0_i y_i x_i = Σ α^0_i (y_i ϕ · x_i)
         = Σ α^0_i (y_i(ϕ · x_i + b) − 1) + Σ α^0_i − b Σ α^0_i y_i = Σ α^0_i,

since the first sum vanishes by the Kuhn-Tucker conditions and the last sum vanishes by the constraint
Σ α^0_i y_i = 0. □
We now prove Lemma 4.1. Let u ∗ v = K(u, v) be the dot product of u and v, and ‖u‖ = (K(u, u))^{1/2} be
the corresponding L2 norm. Given x_1, . . . , x_{n+1} ∈ R^d and y_1, . . . , y_{n+1} ∈ {−1, +1}, recall that the primal
problem of training a support vector classifier is argmin_ψ (1/2)‖ψ‖² subject to y_i(ψ ∗ x_i + b) ≥ 1. Its dual
problem is argmax_α [Σ α_i − (1/2)‖Σ α_i y_i x_i‖²] subject to α_i ≥ 0 and Σ α_i y_i = 0. Since the
Kuhn-Tucker conditions can be satisfied,

    min_ψ (1/2) ψ ∗ ψ = max_α [Σ α_i − (1/2)‖Σ α_i y_i x_i‖²] = 1/(2m²),

where m is the margin of an optimal hyperplane.

Proof. Define w(α) = Σ_i α_i − (1/2)‖Σ α_i y_i x_i‖². Let α^0 = argmax_α w(α) subject to α_i ≥ 0 and
Σ α_i y_i = 0, and let ψ = Σ α^0_i y_i x_i; in other words, α^0 corresponds to the support vector classifier
trained from {(x_i, y_i) : i = 1, . . . , n + 1}. Let α′ = argmax_α w(α) subject to α_p = 0, α_i ≥ 0 for i ≠ p,
and Σ α_i y_i = 0; α′ corresponds to the support vector classifier trained from
{(x_i, y_i) : i = 1, . . . , p − 1, p + 1, . . . , n + 1}. Let γ = (0, . . . , 0, 1, 0, . . . , 0), with the 1 in position p. It
follows that

    w(α^0 − α^0_p · γ) ≤ w(α′) ≤ w(α^0).

(For the dual problem, α′ maximizes w(α) with the constraint that α_p = 0, thus w(α′) is no less than
w(α^0 − α^0_p · γ), which is a special case that satisfies the constraints, including α_p = 0. α^0 maximizes w(α)
with the constraint α_p ≥ 0, which relaxes the constraint α_p = 0, thus w(α′) ≤ w(α^0). For the primal
problem, the training problem corresponding to α′ has fewer samples (it omits (x_p, y_p)) to separate with
maximum margin, thus its margin m(α′) is no less than the margin m(α^0), and w(α′) ≤ w(α^0). On the
other hand, the hyperplane determined by α^0 − α^0_p · γ might not separate the (x_i, y_i) for i ≠ p, and
corresponds to an equivalent or larger “margin” 1/‖ψ(α^0 − α^0_p · γ)‖ than m(α′).)
Let us consider the inequality

    max_t w(α′ + tγ) − w(α′) ≤ w(α^0) − w(α′) ≤ w(α^0) − w(α^0 − α^0_p · γ).

For the left-hand side, writing ψ′ = Σ α′_i y_i x_i, we have

    w(α′ + tγ) = Σ α′_i + t − (1/2)‖Σ α′_i y_i x_i + t · y_p x_p‖²
              = Σ α′_i + t − (1/2)‖Σ α′_i y_i x_i‖² − t · y_p (ψ′ ∗ x_p) − (t²/2)‖y_p x_p‖²
              = w(α′) + t · (1 − y_p · (ψ′ ∗ x_p)) − (t²/2)‖x_p‖²

and w(α′ + tγ) − w(α′) = t(1 − y_p · ψ′ ∗ x_p) − (t²/2)‖x_p‖². Maximizing the expression over t, we find
t = (1 − y_p · ψ′ ∗ x_p)/‖x_p‖², and

    max_t w(α′ + tγ) − w(α′) = (1/2) (1 − y_p · ψ′ ∗ x_p)² / ‖x_p‖².
For the right-hand side,

    w(α^0 − α^0_p · γ) = Σ α^0_i − α^0_p − (1/2)‖Σ α^0_i y_i x_i − α^0_p y_p x_p‖²
                      = Σ α^0_i − α^0_p − (1/2)‖ψ^0‖² + α^0_p y_p ψ^0 ∗ x_p − (1/2)(α^0_p)²‖x_p‖²
                      = w(α^0) − α^0_p (1 − y_p · ψ^0 ∗ x_p) − (1/2)(α^0_p)²‖x_p‖²
                      = w(α^0) − (1/2)(α^0_p)²‖x_p‖².

The last step above is due to the fact that (x_p, y_p) is a support vector, and y_p · ψ^0 ∗ x_p = 1. Thus
w(α^0) − w(α^0 − α^0_p · γ) = (1/2)(α^0_p)²‖x_p‖², and

    (1/2) (1 − y_p · ψ′ ∗ x_p)² / ‖x_p‖² ≤ (1/2)(α^0_p)²‖x_p‖².

Thus

    α^0_p ≥ |1 − y_p · ψ′ ∗ x_p| / ‖x_p‖² ≥ 1/D².

The last step above is due to the fact that the support vector classifier associated with ψ′ misclassifies
(x_p, y_p) according to assumption, so y_p · ψ′ ∗ x_p ≤ 0, and the fact that ‖x_p‖ ≤ D. □
8.324 Relativistic Quantum Field Theory II
MIT OpenCourseWare Lecture Notes
Hong Liu, Fall 2010
Lecture 14
We now consider the Lagrangian for quantum electrodynamics in terms of renormalized quantities:

    L = −(1/4) F^B_{µν} F^{B µν} − i ψ̄_B (γ^µ(∂_µ − i e_B A^B_µ) − m_B) ψ_B
      = −(1/4) Z₃ F_{µν} F^{µν} − i Z₂ ψ̄ (γ^µ ∂_µ − m − δm) ψ − Z₂ e A_µ ψ̄ γ^µ ψ.
We know from previous lectures that there is no mass term for A_µ, that the bare and physical fields and couplings
are related by

    A^B_µ = √Z₃ A_µ,    ψ_B = √Z₂ ψ,    m_B = m + δm,    e_B = e/√Z₃,

and that there is no renormalization for the gauge-fixing term. These results are a consequence of gauge
symmetry, enforced through the Ward identities. We split the Lagrangian into three pieces:

    L = L₀ + L₁ + L_ct,    (1)
where we have

    L₀ = −(1/4) F_{µν} F^{µν} − i ψ̄ (γ^µ ∂_µ − m) ψ,
    L₁ = −e A_µ ψ̄ γ^µ ψ,
    L_ct = −(1/4)(Z₃ − 1) F_{µν} F^{µν} − i (Z₂ − 1) ψ̄ (γ^µ ∂_µ − m) ψ − i Z₂ δm ψ̄ ψ − (Z₂ − 1) e A_µ ψ̄ γ^µ ψ.

L₀ is the free Lagrangian, L₁ is the interaction Lagrangian, and L_ct is the counter-term Lagrangian. The
parameters Z₃ − 1, Z₂ − 1 and δm are specified by the following renormalization conditions:
1. For the spinor propagator, S(k) = 1 / (ik̸ − m + iϵ − Σ(k̸)):

       Σ|_{k̸ = −im} = 0   (physical mass condition),
       (dΣ/dk̸)|_{k̸ = −im} = 0   (physical field condition).

2. For the photon propagator, D^T_{µν}(k) = P^T_{µν} / [(k² − iϵ)(1 − Π(k²))]:

       Π|_{k² = 0} = 0   (physical mass condition).    (2)
These three conditions allow us to fix our three parameters. We note that there is no need to introduce conditions
on vertex corrections, and so, L is written in terms of physically measured masses and couplings. From this
deconstruction, we acquire a set of Feynman rules for the interaction and counterterms in terms of the physical
propagators.
(Diagrams omitted.) The rules are:

    photon propagator:     −i g_{µν} / (k² + iϵ),
    spinor propagator:     1 / (ik̸ − m + iϵ),
    interaction vertex:    −i e γ^µ,
    photon counterterm:    −i (Z₃ − 1)(k² g^{µν} − k^µ k^ν) ∼ O(e²),
    spinor counterterm:    −i [(Z₂ − 1)(ik̸ − m) + Z₂ δm] ∼ O(e²),
    vertex counterterm:    −i (Z₂ − 1) e γ^µ ∼ O(e³).
3.2: VERTEX FUNCTION

Consider the effective vertex we defined before (diagram omitted):

    Γ^µ_phys(k, k) ≡ −i e_phys γ^µ.    (3), (4)

This is the physical vertex: it captures the full electromagnetic properties of a spinor interacting with a photon.
As we showed in the previous lecture, the Ward identities impose that

    Γ^µ(k, k) = −i e γ^µ    (5)

when k is on-shell, with e = √Z₃ e_B being the physical charge. We note that in this case, q = 0, and so this is an
interaction with a static potential, measuring electric charge. We will now proceed to examine the general
structure of Γ^µ(k₁, k₂), with k₁ and k₂ on-shell. We will discuss the physical interpretation, and we will compute
the one-loop correction explicitly. For general k₁² = k₂² = −m², q² = (k₂ − k₁)² ≠ 0, the process being described is
an electron interacting with a general external electromagnetic field. From Lorentz invariance, we can build Γ^µ from
γ^µ, k₁^µ and k₂^µ. Hence,

    i Γ^µ(k₁, k₂) = γ^µ A + i(k₂^µ + k₁^µ) B + (k₂^µ − k₁^µ) C,    (6)
where A, B, and C are 4 × 4 matrix functions of k1 and k2. But, since k1 and k2 are on-shell, and Γµ always
appears in a product as

    ū_{s′}(k₂) Γ^µ(k₁, k₂) u_s(k₁),    (7)

where u_s(k₁) and ū_{s′}(k₂) are on-shell spinor wave functions, we can then simplify Γ^µ with the understanding that
it will always be found in this combination, using the on-shell spinor identities

    k̸ u_s(k) = −i m u_s(k),    ū_s(k) k̸ = −i m ū_s(k).
Hence, A, B, and C are scalars, and functions of the scalars k₁², k₂² and k₁·k₂, or, equivalently, of q² and m. From
the Ward identities, we have that

    q_µ Γ^µ = 0,    (8)

and, as ū_{s′}(k₂) q̸ u_s(k₁) = 0, and q_µ(k₁^µ + k₂^µ) = 0, only the term in C on the left-hand side is non-zero. We therefore
have C = 0. It is common to rewrite Γ^µ using the Gordon identity. Defining σ^{µν} = (i/2)[γ^µ, γ^ν], this result states

    ū_{s′}(k₂) γ^µ u_s(k₁) = (i/2m) ū_{s′}(k₂) [(k₁^µ + k₂^µ) + i q_ν σ^{µν}] u_s(k₁).    (9)
This allows us to exchange the term in B for a term in σµν .
Proof:

    ū_{s′}(k₂) γ^µ u_s(k₁)
      = (i/2m) [ū_{s′}(k₂) γ^µ k̸₁ u_s(k₁) + ū_{s′}(k₂) k̸₂ γ^µ u_s(k₁)]
      = (i/2m) ū_{s′}(k₂) [ ((k₂ν + k₁ν)/2) {γ^µ, γ^ν} − ((k₂ν − k₁ν)/2) [γ^µ, γ^ν] ] u_s(k₁)
      = (i/2m) ū_{s′}(k₂) [(k₂^µ + k₁^µ) + i q_ν σ^{µν}] u_s(k₁).

From this, we find that

    i Γ^µ(k₁, k₂) = e [ γ^µ F₁(q²) − (σ^{µν} q_ν / 2m) F₂(q²) ].    (10)

F₁(q²) and F₂(q²) are known as form factors. We have that e F₁(q²) = A + 2mB, and e F₂(q²) = −2mB. Note that
the Ward identity means that F₁(0) = 1 exactly.
18.600: Lecture 38
Review: practice problems
Scott Sheffield
MIT
Order statistics

• Let X be a uniformly distributed random variable on [−1, 1].
• Compute the variance of X².
• If X₁, . . . , Xₙ are independent copies of X, what is the probability density function for the smallest of the Xᵢ?
Order statistics — answers

• Var[X²] = E[X⁴] − (E[X²])² = ∫₋₁¹ x⁴ (1/2) dx − (∫₋₁¹ x² (1/2) dx)² = 1/5 − 1/9 = 4/45.

• Note that for x ∈ [−1, 1] we have

    P{X > x} = ∫ₓ¹ (1/2) dx = (1 − x)/2.

If x ∈ [−1, 1], then

    P{min{X₁, . . . , Xₙ} > x} = P{X₁ > x, X₂ > x, . . . , Xₙ > x} = ((1 − x)/2)ⁿ.

So the density function is

    −(∂/∂x) ((1 − x)/2)ⁿ = (n/2) ((1 − x)/2)ⁿ⁻¹.
Moment generating functions

• Suppose that Xᵢ are independent copies of a random variable X. Let M_X(t) be the moment generating function for X. Compute the moment generating function for the average Σᵢ₌₁ⁿ Xᵢ/n in terms of M_X(t) and n.

Moment generating functions — answers

• Write Y = Σᵢ₌₁ⁿ Xᵢ/n. Then

    M_Y(t) = E[e^{tY}] = E[e^{(t/n) Σᵢ Xᵢ}] = (M_X(t/n))ⁿ.
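A quick numerical check of this identity (illustrative Python, not from the slides; X uniform on {0, 1} is an arbitrary choice, for which M_X(t) = (1 + e^t)/2):

```python
import math
import random

random.seed(1)

def M_X(t):
    # X uniform on {0, 1}: M_X(t) = E[e^{tX}] = (1 + e^t) / 2
    return (1 + math.exp(t)) / 2

n, t, trials = 5, 0.7, 200_000
predicted = M_X(t / n) ** n  # (M_X(t/n))^n, from the formula above

est = 0.0
for _ in range(trials):
    y = sum(random.randint(0, 1) for _ in range(n)) / n  # the average Y
    est += math.exp(t * y)
est /= trials
# est and predicted agree up to Monte Carlo error.
```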
Entropy

• Suppose X and Y are independent random variables, each equal to 1 with probability 1/3 and equal to 2 with probability 2/3.
• Compute the entropy H(X).
• Compute H(X + Y).
• Which is larger, H(X + Y) or H(X, Y)? Would the answer to this question be the same for any discrete random variables X and Y? Explain.

Entropy — answers

• H(X) = (1/3)(−log(1/3)) + (2/3)(−log(2/3)).

• H(X + Y) = (1/9)(−log(1/9)) + (4/9)(−log(4/9)) + (4/9)(−log(4/9)).

• H(X, Y) is larger, and we have H(X, Y) ≥ H(X + Y) for any X and Y. To see why, write a(x, y) = P{X = x, Y = y} and b(x, y) = P{X + Y = x + y}. Then a(x, y) ≤ b(x, y) for any x and y, so

    H(X, Y) = E[−log a(X, Y)] ≥ E[−log b(X, Y)] = H(X + Y).
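The three answers can be verified numerically (illustrative Python, not from the slides; natural log is an assumption, since the slides leave the base unspecified):

```python
import math
from itertools import product

p = {1: 1/3, 2: 2/3}  # distribution of X (and of Y)

def H(dist):
    """Shannon entropy (natural log) of a distribution given as
    a dict mapping value -> probability."""
    return -sum(q * math.log(q) for q in dist.values())

# Joint law of (X, Y) under independence, and the law of X + Y
joint = {(x, y): p[x] * p[y] for x, y in product(p, p)}
s = {}
for (x, y), q in joint.items():
    s[x + y] = s.get(x + y, 0.0) + q
# H(joint) = 2 H(p) by independence, and H(joint) >= H(s).
```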
MIT OpenCourseWare
https://ocw.mit.edu
18.600 Probability and Random Variables
Fall 2019
For information about citing these materials or our Terms of Use, visit:
https://ocw.mit.edu/terms.
MIT 2.853/2.854
Manufacturing Systems
Introduction to Simulation
Lecturer: Stanley B. Gershwin
... with many slides by Jeremie Gallien
Copyright © 2009 Stanley B. Gershwin.
Slide courtesy of Jérémie Gallien. Used with permission.
What is Simulation?

A computer simulation is a computer program ...
• that calculates a hard-to-calculate quantity using statistical techniques; OR
• that models the behavior of a system by imitating individual components of the system and their interactions.

I am not entirely satisfied with this definition — it does not seem restrictive enough. If all goes well, you will know what a computer simulation is after this lecture.
What is Simulation?

By contrast, a mathematical model is ...
• a mathematical representation of a phenomenon of interest.

Computer programs are often used to obtain quantitative information about the thing that is modeled.
What is Simulation? Purposes

Simulation is used ...
• for calculating hard-to-calculate quantities ...
  ⋆ from mathematics and science,
  ⋆ from engineering, especially system design.
• for developing insight into how a system operates,
• for demonstrating something to bosses or clients.
What is Simulation? Types of simulation

• Static, for the evaluation of a quantity that is difficult to evaluate by other means. The evolution of a system over time is not the major issue.
• Dynamic, for the evaluation of quantities that arise from the evolution of a system over time.

• Static — Monte Carlo
• Dynamic
  ⋆ Discrete time
  ⋆ Discrete event
  ⋆ Solution of differential equations — I don’t consider this simulation, but others do.

[Two figure slides, "Types of simulation," omitted. Slides courtesy of Jérémie Gallien. Used with permission.]
Dynamic Simulation: Discrete Time Simulation

Appropriate for systems in which:
• Time is discrete.
  ⋆ If time in the real system is continuous, it is discretized.
• There is a set of events which have a finite number of possible outcomes. For each event, the outcome that occurs is independent of the other simultaneous events and of all past events.
• There is a system state which evolves according to the events that occur.
  ⋆ That is, the system is a discrete time Markov chain.
  ⋆ Often, other systems can be transformed into or approximated by discrete time Markov chains.
Dynamic Simulation: Discrete Time Simulation

To model a random event that occurs with probability p, let u be a pseudo-random number which is distributed uniformly between 0 and 1.
• Generate u.
• If u ≤ p, the event has occurred. Otherwise, it has not.
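This recipe is one line in most languages; a minimal Python sketch (the course's own code is in Perl):

```python
import random

def event_occurs(p):
    """Bernoulli trial: True with probability p, via a uniform u in [0, 1)."""
    u = random.random()   # Generate u.
    return u <= p         # If u <= p, the event has occurred.

random.seed(0)            # fixed seed, so the run is reproducible
n = 100_000
freq = sum(event_occurs(0.3) for _ in range(n)) / n
print(freq)               # empirical frequency, close to 0.3
```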
Dynamic Simulation: Discrete Time Simulation, Two-Machine Line

PERL code excerpt, Machine 1:

    for ($step=0; $step <= $T; $step++) {
        if ($alpha1[$step]==1) {
            if (rand(1) < $p1) { $alpha1[$step+1] = 0 }
            else               { $alpha1[$step+1] = 1 }
        }
        else {
            if (rand(1) < $r1) { $alpha1[$step+1] = 1 }
            else               { $alpha1[$step+1] = 0 }
        }
        if (($alpha1[$step+1]==1) && ($n[$step] < $N)) { $IU = 1 }
        else { $IU = 0 }
Dynamic Simulation: Discrete Time Simulation, Two-Machine Line

Machine 2 is similar, except:

    if (($alpha2[$step+1]==1) && ($n[$step] > 0)) { $ID = 1; $out++; }
    else { $ID = 0 }

where $out is a counter for the number of parts produced.

Buffer:

    $n[$step+1] = $n[$step] + $IU - $ID;
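The two slides above are only an excerpt; for reference, here is a self-contained Python translation of the same discrete-time logic (my own sketch; the parameter values are illustrative, not taken from the slides):

```python
import random

def two_machine_line(T, p1, r1, p2, r2, N, seed=1):
    """Discrete-time simulation of a two-machine line with a buffer of size N.

    alpha_i = 1 means machine i is up; p_i is the per-step failure probability
    and r_i the per-step repair probability (geometric up/down times, as in
    the Perl excerpt). Returns the average production rate in parts per step.
    """
    random.seed(seed)
    a1, a2, n, out = 1, 1, 0, 0
    for _ in range(T):
        # Machine state transitions, mirroring the Perl excerpt.
        a1 = (random.random() >= p1) if a1 else (random.random() < r1)
        a2 = (random.random() >= p2) if a2 else (random.random() < r2)
        IU = 1 if (a1 and n < N) else 0   # Machine 1 deposits a part
        ID = 1 if (a2 and n > 0) else 0   # Machine 2 removes a part
        out += ID
        n += IU - ID                      # buffer update
    return out / T

rate = two_machine_line(T=100_000, p1=0.02, r1=0.1, p2=0.02, r2=0.1, N=10)
print(rate)  # somewhat below the isolated-machine efficiency r/(r+p) ≈ 0.83
```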
Dynamic Simulation: Discrete Time Simulation, Two-Machine Line

• Advantage: easy to program
• Limitations:
  ⋆ geometric distributions, or others that can be constructed from geometric distributions;
  ⋆ discrete time, or highly discretized continuous time
• Disadvantage: slow. Calculation takes place at every time step.
Dynamic Simulation: Discrete Event Simulation

• Appropriate for systems in which:
  ⋆ Events occur at random times and have a finite set of possible outcomes.
  ⋆ There is a system state that evolves according to the events that occur.
• Time can be discrete or continuous. Discretization is unnecessary.
• These conditions are much less restrictive than for discrete time simulations.
Dynamic Simulation: Discrete Event Simulation

Method:
1. Starting at time 0, determine the next time of occurrence for all possible events. Some times will be obtained deterministically; others from a random number generator.
2. Create an event list, in which all those events are sorted by time, earliest first.
3. Advance the clock to the time of the first event on the list.
4. Update the state of the system according to that event.
5. Determine if any new events are made possible by the change in state, calculate their times (in the same way as in Step 1), and insert them into the event list chronologically.
6. Determine if any events on the list are made impossible by the current event. Remove them from the event list. Also, remove the current event from the list.
7. Go to Step 3.
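Steps 1 through 7 can be sketched with a priority queue serving as the event list (a minimal illustration, not the course's code; Step 6 is often handled in practice by marking cancelled events rather than deleting them from the queue):

```python
import heapq

def simulate(t_end, seed_events, handlers):
    """Minimal discrete-event loop implementing Steps 2-5 and 7.

    seed_events : list of (time, name) pairs, the initial event list (Step 1)
    handlers    : dict mapping event name -> function(state, t) that updates
                  the state and returns a list of new (time, name) events
    """
    events = list(seed_events)
    heapq.heapify(events)                    # Step 2: event list ordered by time
    state = {"count": 0}
    while events:
        t, name = heapq.heappop(events)      # Step 3: advance clock to first event
        if t > t_end:
            break
        for ev in handlers[name](state, t):  # Steps 4-5: update state, schedule new events
            heapq.heappush(events, ev)
    return state

# Toy example: an "arrival" event that reschedules itself every 2.0 time units.
def arrival(state, t):
    state["count"] += 1
    return [(t + 2.0, "arrival")]

final = simulate(t_end=10.0, seed_events=[(0.0, "arrival")], handlers={"arrival": arrival})
print(final["count"])  # arrivals at t = 0, 2, 4, 6, 8, 10 -> 6
```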
Dynamic Simulation: Discrete Event Simulation, Two-Machine Line

Assume the system starts with both machines up, the buffer with n > 1 parts in it. The clock is at t = 0.
• Possible next events: M1 failing, M2 failing. Pick times for both events from the up-time distributions.
• Advance clock to the first event. Suppose it is M1 failing.
  ⋆ Possible next events: M2 failing (already on the list), buffer emptying, M1 getting repaired.
  ⋆ Pick the time for M1 getting repaired from M1’s down-time distribution. Calculate the time for the buffer emptying deterministically from the current buffer level.
Dynamic Simulation: Discrete Event Simulation, Two-Machine Line

• Delete current event from the list, and advance the clock to the next event. Assume it is the buffer emptying.
  ⋆ Now we have to adjust the list because M2 cannot fail while the buffer is empty.
  ⋆ The only possible next event is the repair of M1. Pick the time from the random number generator.
• etc.
Dynamic Simulation: Discrete-Event Algorithm, Two-Machine Line

initialization(): all variables are initialized.
timing(event_list): next_event is determined; sim_clock is updated.
event_occurence(next_event): system_state is updated; stat_counters is updated; rv_generation() is used to update event_list.
termination test?: based on number of iterations, precision achieved, etc.
report_generator(stat_counters): statistical analysis, graphs, etc.

Copyright 2002 © Jérémie Gallien
Slide courtesy of Jérémie Gallien. Used with permission.
Dynamic Simulation: Discrete Event Simulation

• Advantages:
  ⋆ Fast. Calculation only occurs when an event occurs.
  ⋆ General. The time until an event occurs can have any distribution.
  ⋆ Commercial software widely available.
• Limitations: ????
• Disadvantage: Hard to program. (But see “Advantages.”)
Dynamic Simulation: Random numbers

Random numbers are needed. This is not trivial because a computer is inherently deterministic.
• They must be
  ⋆ distributed according to a specified distribution; and
  ⋆ independent.
• It is also very desirable to be able to reproduce the sequence if needed — and not, if not needed.

[Figure slide: pseudo-random number generator. Slide courtesy of Jérémie Gallien. Used with permission.]
Dynamic Simulation: Pseudo-random number generator

A pseudo-random number generator is a function F such that for any number ω, there is a set of numbers Z1(ω), Z2(ω), ... that satisfy

Zi+1(ω) = F(Zi(ω)), with Z0(ω) = ω the seed,

and the sequence of Zi(ω) satisfies certain conditions.
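The classic illustration of such an F is a linear congruential generator (a sketch with textbook constants; real simulations should use a stronger generator such as the Mersenne Twister):

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator: Z_{i+1} = F(Z_i) = (a*Z_i + c) mod m."""
    z = seed
    while True:
        z = (a * z + c) % m
        yield z / m  # scaled to a pseudo-uniform value in [0, 1)

# The same seed reproduces the same sequence; a different seed diverges.
s1 = [u for u, _ in zip(lcg(42), range(5))]
s2 = [u for u, _ in zip(lcg(42), range(5))]
print(s1 == s2)  # True
```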
[Figure slide: pseudo-random number generator. Slide courtesy of Jérémie Gallien. Used with permission.]
Dynamic Simulation: Pseudo-random number generator

• The sequence Zi(ω) is determined — deterministically — by Z0(ω) = ω.
• A small change in ω causes a large change in Zi(ω).
• Since the user can specify the seed, it is possible to re-run a simulation with the same sequence of random numbers. This is sometimes useful for debugging, sensitivity analysis, etc.
• Often, the computer can choose the seed (for example, using the system clock). This guarantees that the same sequence is not chosen.
[Several figure slides illustrating pseudo-random number generation omitted. Slides courtesy of Jérémie Gallien. Used with permission.]
Dynamic Simulation: Statistics, Repetitions, Run Length, and Warmup

• The purpose of simulation is to get quantitative performance measures such as production rate, average inventory, etc.
• The data that comes from a simulation is statistical. Therefore, to get useful numerical results from a simulation,
  ⋆ there must be enough data; and
  ⋆ the data must be obtained under stationary conditions ... although there are sometimes exceptions.
Dynamic Simulation: Statistics, Repetitions, Run Length, and Warmup

Enough data:
• The data is treated like measurement data from the real world. Sample means, sample variances, confidence intervals, etc. must be calculated.
• The more times the simulation is run (i.e., the greater the number of repetitions), the smaller the confidence intervals.
• The longer the simulation is run, the closer the results from each run are to one another, and consequently, the smaller the confidence intervals.
• However, ...
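For example, replication means and a normal-approximation confidence interval can be computed as follows (a sketch with made-up replication data, not results from the course):

```python
from math import sqrt
from statistics import mean, stdev

def confidence_interval(samples, z=1.96):
    """Normal-approximation 95% confidence interval for the mean of
    independent simulation replications."""
    m, s = mean(samples), stdev(samples)
    half = z * s / sqrt(len(samples))
    return m - half, m + half

# Hypothetical production rates from 8 independent replications:
rates = [0.81, 0.79, 0.80, 0.82, 0.78, 0.80, 0.81, 0.79]
lo, hi = confidence_interval(rates)
print(lo, hi)  # interval around the sample mean 0.80
```

More replications shrink the half-width roughly like 1/sqrt(number of replications).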
Dynamic Simulation: Statistics, Repetitions, Run Length, and Warmup

Warmup:
• To evaluate average production rate, inventory, etc., the system must be in steady state.
• But the simulation will start with the system in a known state, so it will not be in steady state immediately.
• Some time is required to get the system in steady state. This is the transient period. It depends on the characteristics of the system.
Dynamic Simulation: Statistics, Repetitions, Run Length, and Warmup

• Therefore, we should not collect data starting at time 0; we should start only after the system reaches steady state.
• The initial period before data is collected is the warmup period.
• Required: warmup period ≥ transient period.
• Problem: it is very hard to determine the transient period.
Dynamic Simulation: Statistics, Repetitions, Run Length, and Warmup

Example:
• 20-machine line.
• ri = .1, pi = .02, i = 1, ..., 19; r20 = .1, p20 = .025.
• Ni = 1000, i = 1, ..., 19.
• The simulation is run for 1,000,000 steps; data is averaged over 10,000 steps.
Dynamic Simulation: Statistics, Repetitions, Run Length, and Warmup

Production rate:

[Figure: production rate vs. time step over the 1,000,000-step run; the initial transient is marked near the start.]
Buffer levels:

[Figure: levels of Buffers 1, 10, and 19 vs. time step over the 1,000,000-step run.]
Dynamic Simulation: The Simulation Process

1. Define the simulation goal (never skip!)
2. Model the system (keep the goal in mind; customer feedback!)
3. Preliminary data collection (rough estimates)
4. Implement, debug and play (choice of tool is key)
5. Sensitivity analysis, data collection, validation (never skip; customer feedback!)
6. Design & run experiment (run length, warm start, variance reduction…)
7. Analyze and communicate (use confidence intervals + predictive accuracy!)

Copyright 2003 © Jérémie Gallien
Slide courtesy of Jérémie Gallien. Used with permission.
Dynamic Simulation: Typical Errors

• Start modeling before knowing what question you want to answer
• Constructing a model of the universe
• Not enough feedback from person knowledgeable with system
• Collect data before knowing how your model will use it
• Report results without understanding where they come from / their qualitative interpretation

Copyright 2003 © Jérémie Gallien
Slide courtesy of Jérémie Gallien. Used with permission.
Dynamic
Simulation
Technical Issues
System Modeling | https://ocw.mit.edu/courses/2-854-introduction-to-manufacturing-systems-fall-2016/0dc16a918eabd3f47e3f6d573485dccf_MIT2_854F16_Simulation.pdf |
émie Gallien
Copyright c(cid:13)2009 Stanley B. Gershwin.
40
Dynamic
Simulation
Technical Issues
System Modeling
Statistics, Repetitions,
Run Length, and Warmup
• “Everything should be made as simple as possible,
but not simpler.” Albert Einstein.
• The simulation goal should be the guiding light when
deciding what to model
• Start to build your model ON PAPER!
• Get client/user feedback early, and maintain model +
assumption sheet for communication purposes
• Collect data and fit distributions… after modeling the
system, with sensitivity analysis in mind!
Slide courtesy of Jérémie Gallien. Used with permission.
Copyright 2003 © Jérémie Gallien
Copyright c(cid:13)2009 Stanley B. Gershwin.
41
MIT OpenCourseWare
https://ocw.mit.edu
2.854 / 2.853 Introduction To Manufacturing Systems
Fall 2016
For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/2-854-introduction-to-manufacturing-systems-fall-2016/0dc16a918eabd3f47e3f6d573485dccf_MIT2_854F16_Simulation.pdf |
Lecture 2
Euclidean Algorithm, Primes
Euclidean gcd Algorithm - Given a, b ∈ Z, not both 0, find (a, b)
• Step 1: If a < 0 or b < 0, replace it with its negative
• Step 2: If a > b, switch a and b
• Step 3: If a = 0, return b
• Step 4: Since a > 0, write b = aq + r with 0 ≤ r < a. Replace (a, b) with (r, a) and go to Step 3.
Proof of correctness. Steps 1 and 2 don’t affect the gcd, and Step 3 is obvious. We need to show for Step 4 that (a, b) = (r, a) where b = aq + r. Let d = (r, a) and e = (a, b).

d = (r, a) ⇒ d|a, d|r ⇒ d|aq + r = b ⇒ d|a, b ⇒ d|(a, b) = e
e = (a, b) ⇒ e|a, e|b ⇒ e|b − aq = r ⇒ e|r, a ⇒ e|(r, a) = d

Since d and e are positive and divide each other, they are equal. □
Proof of termination. After each application of Step 4, the smaller of the pair (a) strictly decreases, since r < a. Since there are only finitely many non-negative integers less than the initial a, there can only be finitely many steps. (Note: because it decreases by at least 1 at each step, this proof only shows a bound of O(a) steps, when in fact the algorithm always finishes in time O(log(a)); left as exercise.) □
To get the linear combination at the same time, keep a table in which each row (r, x, y) satisfies r = 43x + 27y, and each new row is the previous-but-one row minus q times the previous row:

    q    r    x    y
        43    1    0
        27    0    1
    1   16    1   −1
    1   11   −1    2
    1    5    2   −3
    2    1   −5    8

⇒ 1 = −5(43) + 8(27)
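The tabular method can be written as a short routine (my own Python rendering, maintaining the row invariant r = a·x + b·y):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y."""
    r0, x0, y0 = a, 1, 0   # row for a:  a = a*1 + b*0
    r1, x1, y1 = b, 0, 1   # row for b:  b = a*0 + b*1
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1   # Step 4: replace (a, b) with (r, a)
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return r0, x0, y0

print(extended_gcd(43, 27))  # (1, -5, 8), i.e. 1 = -5(43) + 8(27)
```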
(Definition) Prime number: A prime number is an integer p > 1 such that it cannot be written as p = ab with a, b > 1.

Theorem 5 (Fundamental Theorem of Arithmetic). Every positive integer can be written as a product of primes (possibly with repetition) and any such expression is unique up to a permutation of the prime factors. (1 is the empty product, similar to 0 being the empty sum.)
Proof. There are two parts, existence and uniqueness.

Proof of Existence (by contradiction). Let S be the set of numbers which cannot be written as a product of primes. Assume S is not empty, so it has a smallest element n by the well-ordering principle.

n = 1 is not possible by definition, so n > 1. n cannot be prime, since if it were prime it’d be a product with one term, and so wouldn’t be in S. So, n = ab with a, b > 1.

Also, a, b < n so they cannot be in S by minimality of n, and so a and b are products of primes. n is the product of the two, and so is also a product of primes, and so cannot be in S (⇒⇐), and so S is empty. □
Proof of Uniqueness.

Lemma 6. If p is prime and p|ab, then p|a or p|b.

Proof. Assume p ∤ a, and let g = (p, a). Since p is prime, g = 1 or p, but it can’t be p because g|a and p ∤ a, so g = 1. The corollary from last class (4) then shows that p|b. □
Corollary 7. If p|a1a2 . . . an, then p|ai for some i.

Proof. Obvious if n = 1, and true by the lemma for n = 2. By induction, suppose that it holds for n = k. Check for n = k + 1: write

p | a1a2 . . . ak · ak+1 = A · B, with A = a1a2 . . . ak and B = ak+1.

Then p|AB implies p|A or p|B:
p|A = a1a2 . . . ak ⇒ p|ai for some i, by the induction hypothesis;
p|B ⇒ p|ak+1.

And so we see that the hypothesis holds for n = k + 1 as well. □
To prove uniqueness, say that we have n = p1p2 . . . pr = q1q2 . . . qs, which is the smallest element in a set of counterexamples. We want to show that r = s and p1p2 . . . pr is a permutation of q1q2 . . . qs.

p1|n = q1q2 . . . qs, so p1|qi for some i. Since p1 and qi are prime, p1 = qi. Cancel to get p2 . . . pr = q1 . . . qi−1qi+1 . . . qs. This number is less than n, and so not in the set of counterexamples by minimality of n, and so r − 1 = s − 1 and p2 . . . pr is a permutation of q1 . . . qi−1qi+1 . . . qs, and so r = s and p1p2 . . . pr is a permutation of q1q2 . . . qs (⇒⇐). □
Theorem 8 (Euclid). There are infinitely many primes.

Proof by contradiction. Suppose there are finitely many primes p1, p2, . . . , pn, with n ≥ 1. Consider N = (p1p2 . . . pn) + 1. N > 1, and so by the Fundamental Theorem of Arithmetic there must be a prime p dividing N. Using the Euclidean gcd algorithm, (pi, (p1p2 . . . pn) + 1) = (pi, 1) = 1, and so pi ∤ N. So, p ≠ pi for any i, and p is a new prime (⇒⇐). □
Note: If you take the first n primes and compute an = (p1p2 . . . pn) + 1, it’s an open problem whether all an (2, 3, 7, 31, 211, 2311, 30031, . . . ) are squarefree (no repeated factors).
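A quick computation reproduces this sequence and checks squarefree-ness for the first few terms (which of course says nothing about the open problem):

```python
def primes(n):
    """First n primes, by trial division."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def is_squarefree(m):
    """True if no square d^2 > 1 divides m."""
    d = 2
    while d * d <= m:
        if m % (d * d) == 0:
            return False
        d += 1
    return True

# a_n = (p_1 p_2 ... p_n) + 1, starting from the empty product (a_0 = 2):
seq, prod = [2], 1
for p in primes(6):
    prod *= p
    seq.append(prod + 1)
print(seq)                                 # [2, 3, 7, 31, 211, 2311, 30031]
print(all(is_squarefree(a) for a in seq))  # True for these first terms
```

Note that 30031 = 59 × 509 is composite but still squarefree.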
Theorem 9 (Euler). There are infinitely many primes.

Proof (sketch) by contradiction. Suppose there are finitely many primes p1, p2, . . . , pm. Then any positive integer n can be uniquely written as n = p1^e1 p2^e2 . . . pm^em with e1, e2, . . . , em ≥ 0. Consider the product

Σ = (1 + 1/p1 + 1/p1^2 + 1/p1^3 + · · ·)(1 + 1/p2 + 1/p2^2 + 1/p2^3 + · · ·) · · · (1 + 1/pm + 1/pm^2 + · · ·),

where each factor is a convergent geometric series:

1 + 1/pi + 1/pi^2 + 1/pi^3 + · · · = 1/(1 − 1/pi) < ∞.
Since each term is a finite positive number, Σ is also a finite positive number. After expanding Σ, we can pick out any combination of terms to get

(1/p1^e1)(1/p2^e2) · · · (1/pm^em) = 1/n,

which means that Σ is the sum of the reciprocals of all positive integers. Since all the terms are positive, we can rearrange the terms to get

Σ = 1/1 + 1/2 + 1/3 + · · · + 1/n + · · · = lim_{n→∞} Hn = ∞,

and so Σ diverges, which contradicts the finiteness of Σ (⇒⇐). □
Note: Euler’s proof shows that Σ_{p prime} 1/p diverges.
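The key inequality behind the argument, that the finite Euler product dominates the harmonic sum, is easy to check numerically (my own illustration):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, ok in enumerate(sieve) if ok]

N = 1000
H_N = sum(1 / n for n in range(1, N + 1))   # harmonic sum H_N
prod = 1.0
for p in primes_up_to(N):
    prod *= 1 / (1 - 1 / p)   # factor 1 + 1/p + 1/p^2 + ... = (1 - 1/p)^(-1)

print(H_N, prod)  # the product dominates H_N, since every n <= N factors over primes <= N
```

As N grows, H_N grows without bound, so the product over all primes cannot be finite.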
Some famous conjectures about primes

Goldbach Conjecture: Every even integer > 2 is the sum of two primes.

Twin Prime Conjecture: There are infinitely many twin primes (n, n + 2 both prime).

Mersenne Prime Conjecture: There are infinitely many Mersenne primes, i.e., primes of the form 2^n − 1. Note: if 2^n − 1 is prime, then n itself must be a prime.
MIT OpenCourseWare
http://ocw.mit.edu
18.781 Theory of Numbers
Spring 2012
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/18-781-theory-of-numbers-spring-2012/0ddadd3b4c7f386b2ae48a28b6f5ab47_MIT18_781S12_lec2.pdf |
2.4 DNA structure

DNA molecules come in a wide range of length scales, from roughly 50,000 monomers in a λ-phage, to 6 × 10^9 for human, to 9 × 10^10 nucleotides in the lily. The latter would be around thirty meters long if fully stretched. If we consider DNA as a random (non-self avoiding) chain of persistence length ξp ≈ 50 nm, its typical size would be Rg ≈ √(L · ξp), coming to approximately 0.2 mm in human. Excluded volume effects would further increase the extent of the polymer. This is much larger than the size of a typical cell, and thus DNA within cells has to be highly compactified. Eukaryotes organize DNA by wrapping the chain around histone proteins (nucleosomes), which are then packed together.
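As a back-of-the-envelope check of the Rg estimate (my own sketch; the 0.34 nm rise per base pair is an assumed standard value, not stated in the text):

```python
from math import sqrt

bp = 6e9            # base pairs in human DNA (from the text)
rise = 0.34e-9      # assumed rise per base pair, in meters
xi_p = 50e-9        # persistence length, in meters (from the text)
L = bp * rise       # contour length: about 2 m
R_g = sqrt(L * xi_p)
print(L, R_g)       # R_g comes out to a few tenths of a millimeter
```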
At the microscopic level the double helix is held together through Watson–Crick pairs,
G–C and A–T, the former (with a binding energy of around 4kBT ) being roughly twice
as strong as the latter. At finite temperatures, this energy gain competes with the loss of
entropy that comes with braiding the two strands. Indeed at temperatures of around 80◦C
the double strand starts to unravel, denaturing (melting) into bubbles where the two strands
are apart. Regions of DNA that are rich in A–T open up at lower temperatures, those with
high G–C content at higher temperatures. These events are observed as separate blips in
ultraviolet absorption as a function of temperature for short DNA molecules, but overlap
and appear as a continuous curve in very long DNA.
There are software packages that predict the way in which a specific DNA sequence
unravels as a function of temperature. The underlying approach is the calculation of free
energies for a given sequence based on some model of the binding energies, e.g. by adding
energy gains from stacking successive Watson-Crick pairs. Another component is the gain in
entropy upon forming a bubble, which is observed experimentally to depend on the length l
of the denatured fragment as
S(l) ≈ b l − c log l + d ,  with c ≈ 1.8 kB .   (2.71)
The logarithmic dependence is a consequence of loop closure, and can be justified as follows. The two single-stranded segments forming a bubble can be regarded as a loop of length 2l. Let us consider an open polymer of length 2l, and the probability that the two end-points are separated by a distance $\vec r$. For non-interacting random walks the number of configurations of length 2l and end-to-end separation $\vec r$ is easily obtained by appropriate extension of Eq. (2.40) to

$$W(\vec r, 2l) = W(\vec r, l)^2 = g_1^{2l}\,\frac{1}{(4\pi l \xi_p/d)^{d}}\,\exp\left[-\frac{d\, r^2}{2 l \xi_p}\right], \qquad (2.72)$$

where we have further generalized to the case of random walks in d space dimensions. The number of configurations of a bubble is now obtained by integrating over all positions of the intermediate point as

$$\Omega(l) = \int d^d r\, W(\vec r, 2l) = \left(\frac{d}{8\pi \xi_p}\right)^{d/2} \frac{g^l}{l^c}\,, \qquad (2.73)$$

with $g = g_1^2$ and $c = d/2$.
For the more realistic case of self-avoiding polymers, scaling considerations suggest

$$W(\vec r, 2l) = \frac{g^l}{R^d}\,\Phi\!\left(\frac{\vec r}{R}\right), \quad \text{with } R \sim l^\nu, \qquad \text{and} \quad \Omega(l) = \frac{g^l}{l^{d\nu}}\,. \qquad (2.74)$$

We can understand this dependence by noting that in the absence of the loop closure constraint the end-point is likely to be anywhere in a volume of size roughly $R^d \propto l^{d\nu}$, and that bringing the ends together reduces the number of choices by this volume factor. As we shall see shortly, the parameter g is important in determining the value of the denaturation temperature, while c controls the nature (sharpness) of the transition.
2.4.1 The Poland–Scheraga model for DNA Denaturation

Strictly speaking, the denaturation of DNA can be regarded as a phase transition only in the limit where the number of monomers N is infinite. In practice, the crossover in behavior occurs over an interval that becomes narrower with large N, so that it is sharp enough to be indistinguishable from a real singularity, say for N ∼ 10^6. We shall describe here a simplified model for DNA denaturation due to Poland and Scheraga¹. Configurations of partially melted DNA are represented in this model as an alternating sequence of double-stranded segments (rods), and single-stranded loops (bubbles).

Ignoring any interactions between the segments, each configuration is assigned a probability

$$p(l_1, l_2, l_3, \cdots) = \frac{R(l_1)\, B(l_2)\, R(l_3) \cdots}{Z}\,, \qquad (2.75)$$

where we have assumed that the first segment is a rod of length l1, the second a bubble formed from two single strands of length l2, and so on. The double stranded segments are energetically favored, but carry little entropy. To make analytical computations feasible, we shall ignore the variations in binding energy for different nucleotides, and assign an average energy ε < 0 per double-stranded bond. (In this sense, this is a model for denaturation of a DNA homo-polymer.) The weight of a rod segment of length l is thus

$$R(l) = e^{-\beta\epsilon l} \equiv w^l\,, \quad \text{where } w = e^{-\beta\epsilon} > 1\,. \qquad (2.76)$$

¹D. Poland and H. A. Scheraga, “Phase transitions in one dimension and the helix-coil transition in polyamino acids,” J. Chem. Phys. 45, 1456 (1966).
The single-stranded portions are more flexible, and provide an entropic advantage that is modeled according to a weight similar to Eqs. (2.73-2.74), as

$$B(l) = \frac{g^l}{l^c}\,. \qquad (2.77)$$

Clearly the above weight cannot be valid for strands shorter than a persistence length, but it better describes longer bubbles.

For DNA of length L the segment lengths are constrained such that

$$l_1 + l_2 + l_3 + \cdots = L\,, \qquad (2.78)$$

and the partition function, normalizing the weights in Eq. (2.75), is given by

$$Z(L) = \mathop{{\sum}'}_{l_1, l_2, l_3, \ldots} w^{l_1}\,\Omega(l_2)\,w^{l_3}\,\Omega(l_4)\cdots, \qquad (2.79)$$

where the prime indicates the constraint in Eq. (2.78). As usual in statistical physics, such a global constraint can be removed by moving to a different ensemble in which the number of monomers is not fixed, but a DNA of length L is assigned an additional weight of $z^L$. (The quantity z is like a “fugacity,” related to a chemical potential µ by $z = e^{\beta\mu}$.) In such an ensemble, the appropriate partition function is

$$\mathcal{Z}(z) = \sum_{L=1}^{\infty} z^L Z(L)\,. \qquad (2.80)$$
Since L can now take any value, we can sum over the $\{l_i\}$ independently without any constraint, to obtain

$$\mathcal{Z}(z) = \left(\sum_{l_1} z^{l_1} w^{l_1}\right)\left(\sum_{l_2} z^{l_2}\,\Omega(l_2)\right)\left(\sum_{l_3} z^{l_3} w^{l_3}\right)\left(\sum_{l_4} z^{l_4}\,\Omega(l_4)\right)\cdots. \qquad (2.81)$$

The result is thus a product of alternating contributions from rods and bubbles. For each rod segment, we get a contribution

$$R(z) = \sum_{l=1}^{\infty} (zw)^l = \frac{zw}{1 - zw}\,, \qquad (2.82)$$

while the contribution from a bubble is

$$B(z) = \sum_{l=1}^{\infty} z^l\,\Omega(l) = \sum_{l=1}^{\infty} \frac{(zg)^l}{l^c} \equiv f_c^+(zg)\,. \qquad (2.83)$$

The result for bubbles has been expressed in terms of the functions $f_n^+(x)$, frequently encountered in describing the ideal Bose gas in the grand canonical ensemble. We recall some
properties of these functions. First, note that taking the logarithmic derivative lowers the
index by one, as

    z\,\frac{d f_c^+(zg)}{dz} = \sum_{l=1}^{\infty} \frac{(zg)^l}{l^{c-1}} = f_{c-1}^+(zg) .    (2.84)

Second, each f_n^+(x) is an increasing function of its argument, and convergent up to x = 1,
at which point

    f_c^+(1) \equiv \zeta_c ,                                         (2.85)

where \zeta_n is the well-known Riemann zeta-function. The zeta-function is well behaved for
c > 1, and indeed for c < 1, f_c^+(x) diverges as (1 - x)^{c-1} for x \to 1.^2
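As a quick numerical check (a sketch, not part of the notes; all parameter values are illustrative), a truncated series for f_c^+(x) reproduces both properties: the index-lowering identity of Eq. (2.84), and the limit f_c^+(1) = \zeta_c of Eq. (2.85), with \zeta(3/2) \approx 2.612.

```python
# Truncated-series sketch of f_c^+(x) = sum_{l>=1} x^l / l^c (illustrative only).

def f_plus(c, x, nmax=100000):
    """Partial sum of f_c^+(x) with nmax terms."""
    total, xl = 0.0, 1.0
    for l in range(1, nmax + 1):
        xl *= x                         # running power x^l
        total += xl / l ** c
    return total

# Eq. (2.84): z d/dz f_c^+(zg) = f_{c-1}^+(zg), checked by a central
# difference at z = 0.5, g = 1 (the series converges quickly for arguments < 1).
c, z, g, h = 1.5, 0.5, 1.0, 1e-6
lhs = z * (f_plus(c, (z + h) * g, 2000) - f_plus(c, (z - h) * g, 2000)) / (2 * h)
rhs = f_plus(c - 1, z * g, 2000)

# Eq. (2.85): f_c^+(1) approaches zeta(c); zeta(3/2) = 2.612...
zeta_estimate = f_plus(1.5, 1.0)
print(lhs, rhs, zeta_estimate)
```

The partial sum at x = 1 converges slowly (the tail of \sum l^{-3/2} decays as 2/\sqrt{n_{max}}), which is why the estimate only approaches 2.612 from below.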
Next, we must sum over all possible numbers of bubbles in between two rod segments as
end points, leading to

    \mathcal{Z}(z) = R(z) + R(z)B(z)R(z) + R(z)B(z)R(z)B(z)R(z) + \cdots .    (2.86)

This is just a geometric series, easily summed to
    \mathcal{Z}(z) = \frac{R(z)}{1 - R(z)B(z)} = \frac{1}{R^{-1}(z) - B(z)} = \frac{1}{(zw)^{-1} - 1 - f_c^+(zg)} .    (2.87)
The logarithm of the sum provides a useful thermodynamic free energy,

    \log \mathcal{Z}(z) = -\log\left[\frac{1}{zw} - 1 - f_c^+(zg)\right] ,    (2.88)
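As a sanity check (a sketch with arbitrary illustrative parameters, not values from the notes), one can sum the alternating rod/bubble series of Eq. (2.86) term by term and compare it with the closed form of Eq. (2.87):

```python
# Compare the series R + RBR + RBRBR + ... with the closed form (illustrative).

def f_plus(c, x, nmax=5000):
    return sum(x ** l / l ** c for l in range(1, nmax + 1))

z, w, g, c = 0.8, 0.3, 0.5, 1.5      # arbitrary point with zw < 1 and R*B < 1
R = z * w / (1 - z * w)              # rod contribution, Eq. (2.82)
B = f_plus(c, z * g)                 # bubble contribution, Eq. (2.83)

series = sum(R * (R * B) ** k for k in range(100))   # k bubbles between rods
closed = 1.0 / (1.0 / (z * w) - 1.0 - B)             # Eq. (2.87)
print(series, closed)
```

The k-th term of the series is R^{k+1} B^k = R (RB)^k, so the geometric sum converges whenever R(z)B(z) < 1.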
from which we can extract physical observables. For example, while the length L is a
random variable in this ensemble, for a given z, its distribution is narrowly peaked around
the expectation value

    \langle L \rangle = z \frac{\partial}{\partial z} \log \mathcal{Z}(z) = \frac{\frac{1}{zw} + f_{c-1}^+(zg)}{\frac{1}{zw} - 1 - f_c^+(zg)} .    (2.89)
We can also compute the fraction of the polymer that is in the denatured state. Since
each double-strand bond contributes a factor w to the weight, the number of bound pairs
N_B has a mean value

    \langle N_B \rangle = w \frac{\partial}{\partial w} \log \mathcal{Z}(z) = \frac{\frac{1}{zw}}{\frac{1}{zw} - 1 - f_c^+(zg)} .    (2.90)

Taking the ratio of N_B and L gives the fraction of the polymer in the native state as

    \Theta = \frac{\langle N_B \rangle}{\langle L \rangle} = \frac{1}{1 + zw\, f_{c-1}^+(zg)} .    (2.91)
^2 Furthering the mathematical analogy between DNA melting and Bose-Einstein condensation, note that
when the bubble is treated as a random walk, c = d/2, implying that B(z) is only finite for d > 2. Indeed,
d = 2 is also a critical dimension for Bose-Einstein condensation.
Equation (2.91) is not particularly illuminating in its current form, because it gives \Theta in
terms of z, which we introduced as a mathematical device for removing the constraint of
length in the partition function. For meaningful physical results we need to solve for z as
a function of L by inverting Eq. (2.90). This task is simplified in the thermodynamic limit
where L, N_B \to \infty, while their ratio is finite. From Eqs. (2.90-2.91), we see that this limit
is obtained by setting the denominator in these expressions equal to zero, i.e. from the
condition

    f_c^+(zg) = \frac{1}{zw} - 1 .                                    (2.92)
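Condition (2.92) can also be solved numerically. The sketch below (illustrative parameters g = 1, c = 3/2, and a truncated series for f_c^+, not values from the notes) finds z by bisection and then evaluates the native fraction of Eq. (2.91):

```python
# Solve f_c^+(zg) = 1/(zw) - 1 by bisection, then Theta = 1/(1 + zw f_{c-1}^+(zg)).

def f_plus(c, x, nmax=5000):
    return sum(x ** l / l ** c for l in range(1, nmax + 1))

def native_fraction(w, g=1.0, c=1.5):
    lo, hi = 1e-9, min(1.0 / g, 1.0 / w) * (1 - 1e-9)
    for _ in range(60):
        z = 0.5 * (lo + hi)
        # h(z) = f_c^+(zg) - (1/(zw) - 1) increases with z; its root is (2.92)
        if f_plus(c, z * g) - (1.0 / (z * w) - 1.0) < 0.0:
            lo = z
        else:
            hi = z
    z = 0.5 * (lo + hi)
    return 1.0 / (1.0 + z * w * f_plus(c - 1, z * g))   # Eq. (2.91)

# For g = 1, c = 3/2 the transition sits at w_c = g/(1 + zeta_c) ~ 0.277;
# above it a finite native fraction survives and grows with w (lower T).
print(native_fraction(0.5), native_fraction(0.9))
```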
The type of phase behavior resulting from Eqs. (2.92-2.91), and the very existence of a
transition, depend crucially on the parameter c, and we can distinguish between the following
three cases:

(a) For c < 1, the function f_c^+(zg) goes to infinity at z = 1/g. The right hand side of
Eq. (2.92) is a decreasing function of z that goes to zero at z = 1/w. We can graphically
solve this equation by looking for the intersection of the curves representing these functions.
As temperature goes up, 1/w = e^{\beta\epsilon} increases towards unity, and the intersection point
moves to the right. However, there is no singularity and a finite solution z < 1/g exists
at all temperatures. This solution can then be substituted into Eq. (2.91) resulting in a
native fraction that decreases with temperature, but never goes to zero. There is thus no
denaturation transition in this case.
(b) For 1 \le c \le 2, the function f_c^+(zg) reaches a finite value of \zeta_c at zg = 1. The two curves
intersect at this point for z_c = 1/g and w_c = g/(1 + \zeta_c). For all values of w \le w_c, z remains
fixed at 1/g. The derivative of f_c^+(zg), proportional to f_{c-1}^+(zg) from Eq. (2.84), diverges as
its argument approaches unity, such that

    \zeta_c - f_c^+(zg) \propto (1 - zg)^{c-1} .                      (2.93)
From the occurrence of f_{c-1}^+(zg) in the denominator of Eq. (2.91), we observe that \Theta is zero
for w \le w_c, i.e. the polymer is fully denatured. On approaching the transition point from
the other side, \Theta goes to zero continuously. Indeed, Eq. (2.93) implies that a small change
\delta w \equiv w - w_c is accompanied by a much smaller change in z, such that \delta z \equiv (z_c - z) \propto (\delta w)^{1/(c-1)}.
Since f_{c-1}^+(zg) \propto (1 - zg)^{c-2}, we conclude from Eq. (2.91) that the native fraction goes to
zero as

    \Theta \propto (\delta z)^{2-c} \propto (w - w_c)^{\beta} , \quad \text{with } \beta = \frac{2-c}{c-1} .    (2.94)
For a loop treated as a random walk in three dimensions, c = 3/2 and \beta = 1, i.e. the
denatured fraction disappears linearly. Including self-avoidance with c = 3\nu \approx 1.8 leads to
\beta \approx 1/4 and a much sharper transition.
(c) For c > 2, the function f_{c-1}^+(zg) approaches a finite limit of \zeta_{c-1} at the transition point.
The transition is now discontinuous, with \Theta jumping to zero from \Theta_c = (1+\zeta_c)/(1+\zeta_c+\zeta_{c-1}).

Including the effects of self-avoidance within a single loop increases the value of c from 1.5 to
\approx 1.8. In reality there are additional effects of excluded volume between the different segments.
It has been argued that including the interactions between the different segments (single and
double-strands) further increases the value of c to larger than 2, favoring a discontinuous
melting transition.^3
A justification of the role of the exponent c in controlling the nature/existence of the
phase transition can be gleaned by examining the behavior of a single bubble. Examining
the competition between entropy and energy suggests that the probability (weight) of a loop
of length \ell = 2l is proportional to

    p(\ell) \propto \left(\frac{g}{w}\right)^{\ell} \times \frac{1}{\ell^c} .    (2.95)
The probability broadens to include larger values of \ell as (g/w) \to 1.

(a) For c < 1, the above probability cannot be normalized if arbitrarily large values of \ell are
included. Thus at any ratio of (g/w), the probability has to be cut off at some maximum
\ell, and the typical size of a loop remains finite.

(b) For 1 \le c \le 2 the probability can indeed be normalized including all values of \ell (the
normalization is f_c^+(g/w)), but the average size of the loop (related to f_{c-1}^+(g/w)) diverges
as (g/w) \to 1, signaling a continuous phase transition.

(c) For c > 2, the probability is normalizable, and the loop size remains finite as (g/w) \to 1.
There is a limiting loop size at the transition point suggesting a discontinuity.
^3 Y. Kafri, D. Mukamel, and L. Peliti, Phys. Rev. Lett. 85, 4988 (2000).
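The three regimes can be seen in a small numerical sketch (illustrative truncated sums at the marginal point g/w \to 1, not part of the notes): the mean loop size \langle\ell\rangle = \sum \ell\, p(\ell) / \sum p(\ell) keeps growing with the cutoff for c \le 2, but saturates for c > 2.

```python
# Mean loop size under p(l) ~ 1/l^c (the g/w -> 1 limit of Eq. 2.95), truncated.

def mean_loop(c, lmax):
    norm = sum(1.0 / l ** c for l in range(1, lmax + 1))
    return sum(l * (1.0 / l ** c) for l in range(1, lmax + 1)) / norm

for c in (0.5, 1.5, 2.5):
    print(c, mean_loop(c, 10 ** 3), mean_loop(c, 10 ** 4))
# For c = 0.5 and c = 1.5 the mean grows with the cutoff (no finite typical
# loop size); for c = 2.5 it converges, toward zeta(1.5)/zeta(2.5) ~ 1.95.
```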
MIT OpenCourseWare
http://ocw.mit.edu
8.592J Statistical Physics in Biology
Spring 2011
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Normal equations
Random processes
6.011, Spring 2018
Lec 15

Zillow (founded 2006)

Zestimates

[Slide images © Zillow. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/]
LMMSE for multivariate case

    \min_{a_0,\ldots,a_L} \; E\!\left[\left(Y - \left(a_0 + \sum_{j=1}^{L} a_j X_j\right)\right)^{2}\right]
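Setting the derivatives with respect to each a_j to zero yields the normal equations \sum_j E[X_i X_j]\, a_j = E[X_i Y] (with X_0 = 1 carrying the constant a_0). A self-contained sketch (toy synthetic data, not from the lecture) builds these from sample moments and solves them with plain Gaussian elimination:

```python
# LMMSE via normal equations on toy data (illustrative; true model is assumed).
import random

random.seed(0)
# synthetic model: Y = 1 + 2*X1 - 3*X2 + small noise
data = []
for _ in range(20000):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    y = 1 + 2 * x1 - 3 * x2 + random.gauss(0, 0.1)
    data.append((1.0, x1, x2, y))        # leading 1.0 carries the constant a_0

n = 3
# normal equations: sum_j E[X_i X_j] a_j = E[X_i Y], using sample moments
A = [[sum(r[i] * r[j] for r in data) for j in range(n)] for i in range(n)]
b = [sum(r[i] * r[3] for r in data) for i in range(n)]

# solve the 3x3 system by Gaussian elimination with partial pivoting
for col in range(n):
    piv = max(range(col, n), key=lambda row: abs(A[row][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for row in range(col + 1, n):
        f = A[row][col] / A[col][col]
        for k in range(col, n):
            A[row][k] -= f * A[col][k]
        b[row] -= f * b[col]
a = [0.0] * n
for i in reversed(range(n)):
    a[i] = (b[i] - sum(A[i][k] * a[k] for k in range(i + 1, n))) / A[i][i]

print(a)   # close to the true coefficients [1, 2, -3]
```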
6.897: Advanced Topics in Cryptography
Lecturer: Ran Canetti
Focus for first half (until Spring Break):
Foundations of cryptographic protocols
Goal: Provide some theoretical foundations of secure
cryptographic protocols:
• General notions of security
• Security-preserving protocol composition
• Some basic constructions
Overall:
Definitional and foundational slant
(but also constructions, and even some efficient ones…)
Notes
• Throughout, will try to stress conceptual
points and considerations, and will spend
less time on technical details.
• Please interrupt me and ask lots of questions
– both easy and hard!
• The plan is only a plan, and is malleable…
Lecture plan
Lecture 1 (2/5/4): Overview of the course. The definitional
framework of “classic” multiparty function evaluation
(along the lines of [C00]): Motivation for the ideal-model
paradigm. The basic definition.
Lecture 2 (2/6/4): Variants of the basic definition.
Non-concurrent composition.
Lecture 3 (2/12/4): Example: Casting Zero-Knowledge
within the basic definitional framework. The Blum
protocol for Graph Hamiltonicity. Composability of Zero-
Knowledge.
Lecture 4 (2/13/4): The universally composable (UC)
security framework: Motivation and the basic definition
(based on [C01]).
Lectures 5,6 (2/19-20/4): No lecture (TCC)
Lecture 7 (2/26/4): Alternative formulations of UC security. The
universal composition theorem. Survey of feasibility results in
the UC framework. Problem Set 1.
Lecture 8 (2/27/4): UC commitments: Motivation. The ideal
commitment functionality. Impossibility of realizations in the
plain model. A protocol in the Common Reference String
(CRS) model (based on [CF01]).
Lecture 9 (3/4/4): The multi-commitment functionality and
realization. UC Zero Knowledge from UC commitments.
Universal composition with joint state. Problem Set 1 due.
Lecture 10 (3/5/4): Secure realization of any multi-party
functionality with any number of faults (based on
[GMW87,G98,CLOS02]): The semi-honest case. (Static,
adaptive, two-party, multi-party.)
Lecture 11 (3/11/4): Secure realization of any multi-party
functionality with any number of faults: The Byzantine case.
(Static, adaptive, two-party, multi-party.) The case of honest
majority.
Lecture 12 (3/12/4): UC signatures. Equivalence with existential
unforgeability against chosen message attacks (as in
[GMR88]). Usage for certification and authentication.
Lecture 13 (3/18/4): UC key-exchange and secure channels.
(Based on [CK02]).
Lecture 14 (3/19/4): UC encryption and equivalence with security
against adaptive chosen ciphertext attacks (CCA). Replayable
CCA encryption. (Based on [CKN03].) Problem Set 2.
Potential encore: On symbolic (“formal-methods”) analysis of
cryptographic protocols.
Scribe for today?
What do we want from a definition of security for a
given task?
• Should be mathematically rigorous
(I.e., should be well-defined how a protocol is modeled
and whether a given protocol is “in” or “out”).
• Should provide an abstraction (“a primitive”) that matches
our intuition for the requirements of the task.
• Should capture “all realistic attacks” in the expected
execution environment.
• Should guarantee security when the primitive is needed
elsewhere.
• Should not be over-restrictive.
• Should be based on the functionality of the candidate
protocol, not on its structure.
• Nice-to-haves:
– Ability to define multiple tasks within a single framework.
– Conceptual and technical simplicity.
What do we want from a definition of security for a
given task?
• Should capture “all realistic attacks” in the expected
execution environment. Issues include:
– What are the network characteristics? (synchrony, reliability, etc.)
– What are the capabilities of the attacker(s)? (Controlling protocol
participants? The communication links? In what ways?)
– What are the possible inputs?
– What other protocols are running in the same system?
• Should guarantee security when the primitive is needed
elsewhere:
– Take a protocol that assumes access to the “abstract primitive”,
and let it work with a protocol that meets the definition. The overall
behavior should remain unchanged.
⇒ Some flavor of “secure composability” is needed already in
the basic desiderata.
First candidate: The “classic” task of
multiparty secure function evaluation
• We have:
– n parties, p1…pn, n>1, where each p_i has an input value x_i in D.
Some of the parties may be corrupted. (Let’s restrict ourselves to static
corruptions, for now.)
– A probabilistic function f: D^n × R → D^n.
– An underlying communication network
• Want to design a “secure” protocol where each p_i has output
f(x1…xn, r)_i. That is, want:
– Correctness: The honest parties get the correct function value of the
parties’ inputs.
– Secrecy: The corrupted parties learn nothing other than what is
computable from their inputs and prescribed outputs.
Examples:
• F(x1,…,xn) = x1 + … + xn
• F(x1,…,xn) = max(x1,…,xn)
• F(−,…,−) = r ∈U D (a uniformly random element of D)
• F((x0,x1), b) = (−, xb)   (b in {0,1})
• F_R((x,w), −) = (−, (x, R(x,w)))   (R(x,w) is a binary relation)
…
• But, cannot capture “reactive” tasks (e.g.,
commitment, signatures, public-key encryption…)
How to formalize?
How to define correctness?
Question: Based on what input values for the corrupted parties
should the function be computed?
(ie, recall: P_i should output f(x1…xn, r)_i. But what should be
the x’s of the corrupted parties?)
– If we require that f is computed on input values fixed from above then
we get an unrealizable definition.
– If we allow the corrupted parties to choose their inputs then we run into
problems.
Example:
Function: f(x1,x2) = (x1+x2, x1+x2).
Protocol: P1 sends x1 to P2. P2 sends x1+x2 back.
The protocol is both “correct” and “secret”. But it’s not secure…
⇒ Need an “input independence” property, which blends secrecy and
correctness…
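The failure can be made concrete in a few lines of toy code (an informal illustration, not a formal model): a corrupted P2 that chooses its input as a function of the observed x1 can force the output of f(x1,x2) = x1+x2 to any value it likes, which is impossible when inputs must go to a trusted party first.

```python
# Toy version of the slide's example: f(x1, x2) = (x1 + x2, x1 + x2).
# P1 sends x1 in the clear; P2 then announces the sum. A corrupted P2 can
# pick x2 AFTER seeing x1 -- violating input independence.

def run_protocol(x1, p2_strategy):
    x2 = p2_strategy(x1)        # in the ideal process, x2 may not depend on x1
    return x1 + x2              # both parties output the announced sum

honest = lambda seen_x1: 7          # an input fixed before the protocol starts
cheater = lambda seen_x1: -seen_x1  # an input chosen as a function of x1

print(run_protocol(5, honest))      # 12
print(run_protocol(5, cheater))     # 0, and 0 for every other choice of x1
```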
How to formalize?
How to define secrecy?
An attempt: “It should be possible to generate the view of the
corrupted parties given only their inputs and outputs.”
Counter example:
Function: F(−,−) = (r ∈U D, −)
Protocol: P1 chooses r ∈U D, and sends r to P2.
The protocol is clearly not secret (P2 learns r). Yet, it is possible to generate
P2’s view (it’s a random bit).
⇒ Need to consider the outputs of the corrupted parties together with the
outputs of the uncorrupted parties. That is, correctness and secrecy are
again intertwined.
The general definitional approach
[Goldreich-Micali-Wigderson87]
‘A protocol is secure for some task if it “emulates” an
“ideal setting” where the parties hand their inputs to
a “trusted party”, who locally computes the desired
outputs and hands them back to the parties.’
• Several formalizations exist (e.g. [Goldwasser-Levin90,
Micali-Rogaway91, Beaver91, Canetti93, Pfitzmann-Waidner94,
Canetti00, Dodis-Micali00,…])
• I’ll describe the formalization of [Canetti00]
(in a somewhat different presentation).
Presenting the definition:
• Describe the model for protocol execution
(the “real life model”).
• Describe the ideal process for evaluating a function
with a trusted party.
• Describe the notion of “emulating an ideal process”.
I’ll describe the definition for the case of:
• Synchronous networks
• Active (Byzantine) adversary
• Static (non-adaptive) adversary
• Computational security (both adversary and
distinguisher are polytime)
• Authenticated (but not secret) communication
Other cases can be inferred…
Some preliminaries:
• Distribution ensembles:
A distribution ensemble D = {D_{k,a}} (k in N, a in {0,1}*)
is a sequence of distributions, one for each value of k,a.
We will only consider binary ensembles,
i.e. ensembles where each D_{k,a} is over {0,1}.
• Relations between ensembles:
– Equality: D = D’ if for all k,a, D_{k,a} = D’_{k,a}.
– Statistical closeness: D ~ D’ if for all c,d > 0 there is a k_0
such that for all k > k_0 and all a with |a| < k^d we have
|Prob[x ← D_{k,a} : x=1] − Prob[x ← D’_{k,a} : x=1]| < k^{−c} .
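As a toy rendering of the quantifier structure (the ensembles here are invented for illustration): with a gap of k^{-2} between the two probabilities of outputting 1, the bound k^{-c} holds for c = 1.5 but fails for c = 3, so such ensembles are not statistically close — the definition demands the bound for every c.

```python
# Statistical-closeness check for illustrative binary ensembles whose
# probabilities of outputting 1 differ by exactly k^{-2}.

def prob_one_D(k):  return 0.5
def prob_one_Dp(k): return 0.5 + k ** -2.0

def close_for(c, k0, kmax):
    """Does |Prob[x<-D_k: x=1] - Prob[x<-D'_k: x=1]| < k^{-c} hold for k0 < k < kmax?"""
    return all(abs(prob_one_D(k) - prob_one_Dp(k)) < k ** -c
               for k in range(k0 + 1, kmax))

print(close_for(1.5, 10, 10000))   # True:  k^{-2} < k^{-1.5} for k > 1
print(close_for(3.0, 10, 10000))   # False: k^{-2} is not below k^{-3}
```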
• Multiparty functions:
An n-party function is a function f: N × R × ({0,1}*)^{n+1} → ({0,1}*)^{n+1}
• Interactive Turing machines (ITMs):
An ITM is a TM with some special tapes:
– Incoming communication tape
– Incoming subroutine output tape
– Identity tape, security parameter tape
An activation of an ITM is a computation until a “waiting” state is reached.
• Polytime ITMs:
An ITM M is polytime if at any time the overall number of steps taken is
polynomial in the security parameter plus the overall input length.
• Systems of interacting ITMs (Fixed number of ITMs):
– A system of interacting ITMs is a set of ITMs, one of them the initial one,
plus a set of “writing permissions”.
– A Run of a system (M0 …Mm):
• The initial ITM M0 starts with some external input.
• In each activation an ITM may write to tapes of other ITMs.
• The ITMs whose tapes are written to enter a queue to be activated next.
• The output is the output of the initial ITM M0.
• Multiparty protocols:
An n-party protocol is a sequence of n ITMs, P = (P1 …Pn).
The “real-life model” for protocol execution
A system of interacting ITMs:
• Participants:
– An n-party protocol P = (P1 …Pn). (any n > 1)
– Adversary A, controlling a set B of “bad parties” in P.
(ie, the bad parties run code provided by A)
– Environment Z (the initial ITM)
• Computational process:
– Z gets input z
– Z gives A an input a and each good party P_i an input x_i
– Until all parties of P halt do:
• Good parties generate messages for current round.
• A gets all messages and generates messages of bad parties.
• A delivers the messages addressed to the good parties.
– Before halting, A and all parties write their outputs on Z’s
subroutine output tape.
– Z generates an output bit b in {0,1}.
• Notation:
– EXEC_{P,A,Z}(k,z,r): output of Z after above interaction with P, A,
on input z and randomness r for the parties with s.p. k.
(r denotes randomness for all parties, ie, r = r_Z, r_A, r_1 … r_n.)
– EXEC_{P,A,Z}(k,z): The output distribution of Z after above
interaction with P, A, on input z and s.p. k, and uniformly chosen
randomness for the parties.
– EXEC_{P,A,Z}:
The ensemble of distributions {EXEC_{P,A,Z}(k,z)} (k in N, z in {0,1}*)
The ideal process for evaluation of f:
Another system of interacting ITMs:
• Participants:
– “Dummy parties” P1 …Pn.
– Adversary S, controlling the “bad parties” P_i in B.
– Environment Z
– A “trusted party” F for evaluating f
• Computational process:
– Z gets input z
– Z gives S an input a and each good party P_i an input x_i
– Good parties hand their inputs to F
– Bad parties send to F whatever S says. In addition, S sends its own input.
– F evaluates f on the given inputs (tossing coins if necessary) and hands
each party and S its function value. Good parties set their outputs to this
value.
– S and all parties write their outputs on Z’s subroutine output tape.
– Z generates a bit b in {0,1}.
• Notation:
– IDEAL^f_{S,Z}(k,z,r): output of Z after above interaction with
F, S, on input z and randomness r for the parties with s.p. k.
(r denotes randomness for all parties, ie, r = r_Z, r_S, r_f.)
– IDEAL^f_{S,Z}(k,z): The output distribution of Z after above
interaction with f, S, on input z, s.p. k, and uniform
randomness for the parties.
– IDEAL^f_{S,Z}:
The ensemble {IDEAL^f_{S,Z}(k,z)} (k in N, z in {0,1}*)
• Notation:
– Let B be a collection of subsets of {1..n}. An adversary is
B-limited if the set B of parties it corrupts is in B.
Definition of security:
Protocol P B-emulates the ideal process for f if
for any B-limited adversary A there exists an adversary S
such that for all Z we have:
IDEAL^f_{S,Z} ~ EXEC_{P,A,Z} .
In this case we say that protocol P B-securely realizes f.
In other words: “Z cannot tell with more than negligible probability
whether it is interacting with A and parties running P, or with S
and the ideal process for f.”
Or: “whatever damage that A can do to the parties running the
protocol can be done also in the ideal process.”
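The definition can be exercised on the earlier sum-protocol counterexample. In this informal toy sketch (not the formal ITM model; all values are illustrative), an environment Z that outputs 1 whenever the final sum is 0 distinguishes the real execution, where a corrupted P2 echoes −x1, from any ideal execution, where the corrupted party's input must be fixed before x1 is seen:

```python
# Toy distinguisher for f(x1, x2) = x1 + x2 with a cheating P2 (informal sketch).
import random

def real_exec(x1):
    return x1 + (-x1)            # corrupted P2 saw x1 and replies with -x1

def ideal_exec(x1, s_input):
    return x1 + s_input          # S must commit to P2's input independently of x1

random.seed(1)
trials = 1000
z_real  = sum(real_exec(random.randrange(100)) == 0 for _ in range(trials))
z_ideal = sum(ideal_exec(random.randrange(100), -5) == 0 for _ in range(trials))
print(z_real, z_ideal)           # 1000 vs roughly 10: Z tells the two apart
```

No simulator S can match the real execution here, which is exactly why this protocol fails to securely realize f.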
This implies:
• Correctness: For all inputs the good parties output the
“correct function value” based on the provided inputs
• Secrecy: Whatever A computes can be computed
given only the prescribed outputs
• Input independence: The inputs of the bad parties are
chosen independently of the inputs of the good
parties.
Equivalent formulations:
• Z outputs an arbitrary string (rather than one bit) and Z’s
outputs of the two executions should be indistinguishable.
• Z, A are limited to be deterministic.
• Change order of quantifiers: S can depend on Z.
Variants
• Passive (semi-honest) adversaries: The corrupted parties
continue running the original protocol.
• Secure channels, unauthenticated channels:
Change the “real-life” model accordingly.
• Unconditional security: Allow Z, A to be computationally
unbounded. (S should remain polynomial in Z,A,P,
otherwise weird things happen…)
• Perfect security: Z’s outputs in the two runs should be
identically distributed.
• Adaptive security: Both A and S can corrupt parties as the
computation proceeds. Z learns about corruptions.
Some caveats:
– What information is disclosed upon corruption?
– For composability, A and Z can talk at each corruption. | https://ocw.mit.edu/courses/6-897-selected-topics-in-cryptography-spring-2004/0dfd54053823c507a9a117d910d31ff5_lecture1_2.pdf |