IV.E Perturbative RG (First Order)
The last section demonstrated how various expectation values associated with the
Landau–Ginzburg Hamiltonian can be calculated perturbatively in powers of u. However,
the perturbative series is inherently divergent close to the critical point and cannot be used
to characterize critical behavior in dimensions d ≤ 4. Wilson showed that it is possible to
combine perturbative and renormalization group approaches into a systematic method for
calculating critical exponents. Accordingly, we shall extend the RG calculation of the Gaussian
model in sec. III.G to the Landau–Ginzburg Hamiltonian, by treating $U = u\int d^d\mathbf{x}\, m^4$ as a
perturbation.
1. Coarse Grain: This is the most difficult step of the RG procedure. As before, subdivide
the fluctuations into two components as,
$$
\vec m(\mathbf q) = \begin{cases} \vec{\tilde m}(\mathbf q) & \text{for } 0 < q < \Lambda/b, \\[1ex]
\vec\sigma(\mathbf q) & \text{for } \Lambda/b < q < \Lambda. \end{cases}
\tag{IV.28}
$$
In the partition function,
$$
Z = \int D\vec{\tilde m}(\mathbf q)\, D\vec\sigma(\mathbf q)\,
\exp\left\{ -\int_0^{\Lambda} \frac{d^d\mathbf q}{(2\pi)^d}\, \frac{t+Kq^2}{2}
\left(|\tilde m(\mathbf q)|^2 + |\sigma(\mathbf q)|^2\right)
- U[\vec{\tilde m}(\mathbf q), \vec\sigma(\mathbf q)]\right\},
\tag{IV.29}
$$
the two sets of modes are mixed by the operator U . Formally, the result of integrating out
{~σ(q)} can be written as
$$
Z = \int D\vec{\tilde m}(\mathbf q)\,
\exp\left\{ -\int_0^{\Lambda/b} \frac{d^d\mathbf q}{(2\pi)^d}\, \frac{t+Kq^2}{2}\, |\tilde m(\mathbf q)|^2 \right\}
\exp\left\{ -\frac{nV}{2} \int_{\Lambda/b}^{\Lambda} \frac{d^d\mathbf q}{(2\pi)^d}\, \ln\!\left(t+Kq^2\right) \right\}
\left\langle e^{-U[\vec{\tilde m},\vec\sigma]} \right\rangle_\sigma
\equiv \int D\vec{\tilde m}(\mathbf q)\, e^{-\beta\tilde H[\vec{\tilde m}]}.
\tag{IV.30}
$$
Here we have defined the partial averages
$$
\langle \mathcal O \rangle_\sigma \equiv \frac{1}{Z_\sigma}\int D\vec\sigma(\mathbf q)\, \mathcal O\,
\exp\left\{ -\int_{\Lambda/b}^{\Lambda} \frac{d^d\mathbf q}{(2\pi)^d}\, \frac{t+Kq^2}{2}\, |\sigma(\mathbf q)|^2 \right\},
\tag{IV.31}
$$
with $Z_\sigma = \int D\vec\sigma(\mathbf q)\, \exp\{-\beta H_0[\vec\sigma]\}$ being the Gaussian partition function associated with
the short wavelength fluctuations. From eq.(IV.30), we obtain
$$
\beta\tilde H[\vec{\tilde m}] = V\,\delta f_b^0
+ \int_0^{\Lambda/b} \frac{d^d\mathbf q}{(2\pi)^d}\, \frac{t+Kq^2}{2}\, |\tilde m(\mathbf q)|^2
- \ln\left\langle e^{-U[\vec{\tilde m},\vec\sigma]} \right\rangle_\sigma.
\tag{IV.32}
$$
The final expression can be calculated perturbatively as,
$$
\ln\left\langle e^{-U} \right\rangle_\sigma = -\langle U\rangle_\sigma
+ \frac{1}{2}\left(\langle U^2\rangle_\sigma - \langle U\rangle_\sigma^2\right) + \cdots
+ \frac{(-1)^\ell}{\ell!}\times\left(\ell^{\text{th}}\text{ cumulant of }U\right) + \cdots.
\tag{IV.33}
$$
The cumulants can be computed using the rules set in the previous sections. For example,
at the first order we need to compute
$$
\left\langle U[\vec{\tilde m},\vec\sigma] \right\rangle_\sigma
= u\int \frac{d^d\mathbf q_1\, d^d\mathbf q_2\, d^d\mathbf q_3\, d^d\mathbf q_4}{(2\pi)^{4d}}\,
(2\pi)^d \delta^d(\mathbf q_1+\mathbf q_2+\mathbf q_3+\mathbf q_4)\,
\left\langle
\left[\vec{\tilde m}(\mathbf q_1)+\vec\sigma(\mathbf q_1)\right]\cdot\left[\vec{\tilde m}(\mathbf q_2)+\vec\sigma(\mathbf q_2)\right]\;
\left[\vec{\tilde m}(\mathbf q_3)+\vec\sigma(\mathbf q_3)\right]\cdot\left[\vec{\tilde m}(\mathbf q_4)+\vec\sigma(\mathbf q_4)\right]
\right\rangle_\sigma.
\tag{IV.34}
$$
The following types of terms result from expanding the product:
$$
\begin{array}{ccl}
[1] & 1 & \left\langle \vec{\tilde m}(\mathbf q_1)\cdot\vec{\tilde m}(\mathbf q_2)\;\; \vec{\tilde m}(\mathbf q_3)\cdot\vec{\tilde m}(\mathbf q_4) \right\rangle_\sigma \\[1ex]
[2] & 4 & \left\langle \vec\sigma(\mathbf q_1)\cdot\vec{\tilde m}(\mathbf q_2)\;\; \vec{\tilde m}(\mathbf q_3)\cdot\vec{\tilde m}(\mathbf q_4) \right\rangle_\sigma \\[1ex]
[3] & 2 & \left\langle \vec\sigma(\mathbf q_1)\cdot\vec\sigma(\mathbf q_2)\;\; \vec{\tilde m}(\mathbf q_3)\cdot\vec{\tilde m}(\mathbf q_4) \right\rangle_\sigma \\[1ex]
[4] & 4 & \left\langle \vec\sigma(\mathbf q_1)\cdot\vec{\tilde m}(\mathbf q_2)\;\; \vec\sigma(\mathbf q_3)\cdot\vec{\tilde m}(\mathbf q_4) \right\rangle_\sigma \\[1ex]
[5] & 4 & \left\langle \vec\sigma(\mathbf q_1)\cdot\vec\sigma(\mathbf q_2)\;\; \vec\sigma(\mathbf q_3)\cdot\vec{\tilde m}(\mathbf q_4) \right\rangle_\sigma \\[1ex]
[6] & 1 & \left\langle \vec\sigma(\mathbf q_1)\cdot\vec\sigma(\mathbf q_2)\;\; \vec\sigma(\mathbf q_3)\cdot\vec\sigma(\mathbf q_4) \right\rangle_\sigma
\end{array}
\tag{IV.35}
$$
The second number in each line is the number of terms with a given 'symmetry'.
The total of these coefficients is $2^4 = 16$. Since the averages $\langle \mathcal O\rangle_\sigma$ involve only the short
wavelength fluctuations, only contractions with ~σ appear. The resulting internal momenta
are integrated from Λ/b to Λ.
Term [1] has no $\vec\sigma$ factors and evaluates to $U[\vec{\tilde m}]$. The second and fifth terms involve
an odd number of $\vec\sigma$'s, and their average is zero. Term [3] has one contraction and evaluates
to
$$
-u\times 2\int \frac{d^d\mathbf q_1\cdots d^d\mathbf q_4}{(2\pi)^{4d}}\,
(2\pi)^d\delta^d(\mathbf q_1+\cdots+\mathbf q_4)\,
\frac{\delta_{jj}\,(2\pi)^d\delta^d(\mathbf q_1+\mathbf q_2)}{t+Kq_1^2}\;
\vec{\tilde m}(\mathbf q_3)\cdot\vec{\tilde m}(\mathbf q_4)
= -2nu \int_0^{\Lambda/b}\frac{d^d\mathbf q}{(2\pi)^d}\,|\tilde m(\mathbf q)|^2
\int_{\Lambda/b}^{\Lambda}\frac{d^d\mathbf k}{(2\pi)^d}\,\frac{1}{t+Kk^2}.
\tag{IV.36}
$$
Term [4] also has one contraction but there is no closed loop (no factor $\delta_{jj}$) and hence no
factor of $n$. The various contractions of the four $\vec\sigma$'s in term [6] lead to a number of terms with
no dependence on $\vec{\tilde m}$. We shall denote the sum of these terms by $uV\,\delta f_b^1$. Summing up all
terms, the coarse grained Hamiltonian at order of $u$ is given by
$$
\beta\tilde H[\vec{\tilde m}] = V\left(\delta f_b^0 + u\,\delta f_b^1\right)
+ \int_0^{\Lambda/b} \frac{d^d\mathbf q}{(2\pi)^d}\, \frac{\tilde t+Kq^2}{2}\, |\tilde m(\mathbf q)|^2
+ u\int_0^{\Lambda/b} \frac{d^d\mathbf q_1\, d^d\mathbf q_2\, d^d\mathbf q_3}{(2\pi)^{3d}}\;
\vec{\tilde m}(\mathbf q_1)\cdot\vec{\tilde m}(\mathbf q_2)\;
\vec{\tilde m}(\mathbf q_3)\cdot\vec{\tilde m}(-\mathbf q_1-\mathbf q_2-\mathbf q_3),
\tag{IV.37}
$$
where
$$
\tilde t = t + 4u(n+2)\int_{\Lambda/b}^{\Lambda}\frac{d^d\mathbf k}{(2\pi)^d}\,\frac{1}{t+Kk^2}.
\tag{IV.38}
$$
The coarse grained Hamiltonian is thus described by the same three parameters $t$, $K$, and $u$.
The other two parameters in the coarse grained Hamiltonian are unchanged, i.e.
$$
\tilde K = K, \qquad \tilde u = u.
\tag{IV.39}
$$
2. Rescale by setting $\mathbf q = b^{-1}\mathbf q'$, and
3. Renormalize by setting $\vec{\tilde m} = z\,\vec m\,'$, to get
$$
(\beta H)'[\vec m\,'] = V\left(\delta f_b^0 + u\,\delta f_b^1\right)
+ \int_0^{\Lambda} \frac{d^d\mathbf q'}{(2\pi)^d}\, b^{-d} z^2\, \frac{\tilde t + K b^{-2} q'^2}{2}\, |m'(\mathbf q')|^2
+ u\, z^4 b^{-3d} \int_0^{\Lambda} \frac{d^d\mathbf q_1'\, d^d\mathbf q_2'\, d^d\mathbf q_3'}{(2\pi)^{3d}}\;
\vec m\,'(\mathbf q_1')\cdot\vec m\,'(\mathbf q_2')\;
\vec m\,'(\mathbf q_3')\cdot\vec m\,'(-\mathbf q_1'-\mathbf q_2'-\mathbf q_3').
\tag{IV.40}
$$
The renormalized Hamiltonian is characterized by the triplet of interactions $(t', K', u')$,
such that
$$
t' = b^{-d} z^2\, \tilde t, \qquad
K' = b^{-d-2} z^2 K, \qquad
u' = b^{-3d} z^4 u.
\tag{IV.41}
$$
As in the Gaussian model there is a fixed point at $t^* = u^* = 0$, provided that we set
$z = b^{1+d/2}$, such that $K' = K$. The recursion relations for $t$ and $u$ in the vicinity of this
point are given by
$$
\begin{cases}
t'_b = b^2\left[t + 4u(n+2)\displaystyle\int_{\Lambda/b}^{\Lambda}\frac{d^d\mathbf k}{(2\pi)^d}\,\frac{1}{t+Kk^2}\right] \\[2.5ex]
u'_b = b^{4-d}\, u.
\end{cases}
\tag{IV.42}
$$
While the recursion relation for $u$ at this order is identical to that obtained by dimensional
analysis, the one for $t$ is different. It is common to convert the discrete recursion relations
to continuous differential equations by setting $b = e^{\delta\ell}$, such that for an infinitesimal $\delta\ell$,
$$
t'_b \equiv t(b) = t(1+\delta\ell) = t + \delta\ell\,\frac{dt}{d\ell} + O(\delta\ell^2), \qquad
u'_b \equiv u(b) = u + \delta\ell\,\frac{du}{d\ell} + O(\delta\ell^2).
$$
Expanding eqs.(IV.42) to order of δℓ, gives
$$
\begin{cases}
t + \delta\ell\,\dfrac{dt}{d\ell} = (1+2\delta\ell)\left[t + 4u(n+2)\,\dfrac{S_d}{(2\pi)^d}\,\dfrac{\Lambda^d\,\delta\ell}{t+K\Lambda^2}\right] \\[2.5ex]
u + \delta\ell\,\dfrac{du}{d\ell} = \bigl(1+(4-d)\delta\ell\bigr)\, u.
\end{cases}
\tag{IV.43}
$$
The differential equations governing the evolution of t and u under rescaling are then
$$
\begin{cases}
\dfrac{dt}{d\ell} = 2t + \dfrac{4u(n+2)\,K_d\,\Lambda^d}{t+K\Lambda^2} \\[2.5ex]
\dfrac{du}{d\ell} = (4-d)\, u,
\end{cases}
\tag{IV.44}
$$
where $K_d \equiv S_d/(2\pi)^d$.
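The flow equations (IV.44) can also be integrated numerically. The sketch below (parameter values are illustrative, not from the notes) uses a simple Euler step and shows that for $d < 4$ the quartic coupling $u$ grows under the flow, so the Gaussian fixed point is unstable:

```python
def flow(t, u, d, n=1, K=1.0, Lam=1.0, Kd=0.05):
    """Right-hand side of eq. (IV.44); Kd stands for S_d/(2*pi)^d."""
    dt = 2 * t + 4 * u * (n + 2) * Kd * Lam ** d / (t + K * Lam ** 2)
    du = (4 - d) * u
    return dt, du

def integrate(t0, u0, d, ell_max=2.0, h=1e-3):
    """Euler integration of the flow up to RG 'time' ell_max."""
    t, u = t0, u0
    for _ in range(int(ell_max / h)):
        dt, du = flow(t, u, d)
        t, u = t + h * dt, u + h * du
    return t, u

# Start near the Gaussian fixed point in d = 3: u is relevant and grows.
t_f, u_f = integrate(t0=0.0, u0=1e-3, d=3)
print(t_f, u_f)   # u_f tracks the exact solution u0 * exp((4-d)*ell)
```

For $d = 3$ and $\ell = 2$ the result reproduces $u(\ell) = u_0\, e^{(4-d)\ell}$ to the accuracy of the Euler step.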
The recursion relation for $u$ is easily integrated to give $u(\ell) = u_0\, e^{(4-d)\ell} = u_0\, b^{4-d}$.
The recursion relations can be linearized in the vicinity of the fixed point $t^* = u^* = 0$,
by setting $t = t^* + \delta t$ and $u = u^* + \delta u$, as
$$
\frac{d}{d\ell}\begin{pmatrix}\delta t \\ \delta u\end{pmatrix} =
\begin{pmatrix} 2 & \dfrac{4(n+2)\,K_d\,\Lambda^{d-2}}{K} \\[2ex] 0 & 4-d \end{pmatrix}
\begin{pmatrix}\delta t \\ \delta u\end{pmatrix}.
\tag{IV.45}
$$
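Because the matrix in eq. (IV.45) is upper triangular, its eigenvalues can be read off the diagonal. The following sketch (the values of $n$, $K_d$, $\Lambda$, $K$ are illustrative placeholders) confirms this numerically:

```python
import numpy as np

n, d, K, Lam, Kd = 1, 3.0, 1.0, 1.0, 0.05   # illustrative values
M = np.array([[2.0, 4 * (n + 2) * Kd * Lam ** (d - 2) / K],
              [0.0, 4 - d]])
evals = sorted(np.linalg.eigvals(M).real)
print(evals)   # diagonal entries: y_u = 4 - d and y_t = 2
```

Changing the off-diagonal element leaves the eigenvalues untouched; it only tilts the eigen-direction associated with $y_u$, as discussed next.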
In the differential form of the recursion relations, the eigenvalues of the matrix determine
the relevance of operators. Since the above matrix is triangular, its eigenvalues are the
diagonal elements, and as in the Gaussian model we can identify $y_t = 2$ and $y_u = 4-d$.
The results at this order are identical to those obtained from dimensional analysis on the
Gaussian model. The only difference is in the eigen-directions. The exponent $y_t = 2$ is still
associated with the direction $u = 0$, while $y_u = 4-d$ is actually associated with the direction
$t = -4u(n+2)K_d\Lambda^{d-2}/K$. This agrees with the shift in the transition temperature
calculated to order $u$ from the susceptibility.
For $d > 4$ the Gaussian fixed point has only one unstable direction, associated with $y_t$.
It thus correctly describes the phase transition. For $d < 4$ it has two relevant directions
and is unstable. Unfortunately, the recursion relations have no other fixed point at this
order, and it appears that we have learned little from the perturbative RG. However, since
we are dealing with an alternating series we can anticipate that the recursion relations at
the next order are modified to
$$
\begin{cases}
\dfrac{dt}{d\ell} = 2t + \dfrac{4u(n+2)\,K_d\,\Lambda^d}{t+K\Lambda^2} - Au^2 \\[2.5ex]
\dfrac{du}{d\ell} = (4-d)\, u - Bu^2,
\end{cases}
\tag{IV.46}
$$
with $A$ and $B$ positive. There is now an additional fixed point at $u^* = (4-d)/B$ for
$d < 4$. For a systematic perturbation theory we need to keep the parameter $u$ small. Thus
the new fixed point can be explored systematically only for small $\epsilon = 4-d$; we are led to
consider an expansion in the dimension of space in the vicinity of $d = 4$! For a calculation
valid at $O(\epsilon)$ we have to keep track of terms of second order in the recursion relation for
$u$, but only to first order in that of $t$. It is thus unnecessary to calculate the term $A$ in the
above recursion relation.
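The new fixed point of eq. (IV.46) and its stability along the $u$ direction can be illustrated numerically. In the sketch below, $B$ is a placeholder positive constant (its actual value comes from the second-order calculation, which the notes have not yet performed); the code locates $u^* = \epsilon/B$ and checks that the linearized $u$-flow at $u^*$ has slope $-\epsilon$, i.e. the new fixed point is attractive along $u$:

```python
def du_dl(u, eps, B):
    """u-flow of eq. (IV.46): du/dl = eps*u - B*u^2, with eps = 4 - d."""
    return eps * u - B * u ** 2

eps, B = 0.1, 2.0          # placeholder values: d = 3.9, B > 0
u_star = eps / B           # nontrivial fixed point u* = eps/B
assert abs(du_dl(u_star, eps, B)) < 1e-12

# Linear stability: d(du/dl)/du at u* equals eps - 2*B*u* = -eps < 0.
slope = eps - 2 * B * u_star
print(u_star, slope)
```

The slope $-\epsilon$ is the origin of the $O(\epsilon)$ correction to $y_u$ in the epsilon expansion.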
MIT OpenCourseWare
http://ocw.mit.edu
8.334 Statistical Mechanics II: Statistical Physics of Fields
Spring 2014
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
D-Lab, Spring 2010
Development through Dialogue, Design and Dissemination
Today’s Class
• Logistics
• Design Box Presentations
• Design, Innovation, Invention and the Design Process
• Discussion
– Readings
• Case Studies

Some Logistics
• Turning in Homework
• Course website
• Textbooks
Technology Boxes
• Which one is your favorite?
• Which one exemplifies the trade-offs that were made?
• 2 minutes or less!

Design, Innovation and Invention
invent: to be the first to think of, make, or use something
design: to work out or create the form or structure of something

Source: Encarta® World English Dictionary © 1999 Microsoft Corporation. All rights reserved. Developed for Microsoft by Bloomsbury Publishing Plc. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/fairuse.
Innovation
Clear plastic bottles poking through roof capture sunlight to illuminate windowless rooms
http://www.youtube.com/watch?v=CS3764DmIP4

Harder problems lead to better inventions
Shawn Frayne
Challenges in Design
• Tradeoffs
• Dynamics and long-term effects of use
• Details
• Time Pressures
• Economics
• Use and mis-use
• Ethics
The Design Process
• Information Gathering
• Problem Definition
• Design Specifications
• Idea Generation
• Analysis & Experimentation
• Concept Evaluation
• Detail Design
• Fabrication
• Testing & Evaluation

The Creativity Caveat
• Don’t let the process detract from the product
The Changing Approach
Design Specifications
• Translate customer needs into quantitative design performance targets
• Define internal basis for measuring success
• Capture the necessary characteristics for a successful product
• Provide a basis for resolving trade-offs

Translating Customer Needs

Need            Design Attribute           Units        Owner
Easy assembly   Assembly time              seconds      Floyd
Safe            Structural safety factor   —            Lisa
Safe            Fatigue life               cycles       Nathan
Magical         Works like magic           subjective   Meta
Brainstorming Method
• generate lots of ideas
• explore all classes of solutions
• develop new perspectives
• generate usable information

Brainstorming Rules
• Defer judgment
• Build upon the ideas of others
• One conversation at a time
• Stay focused on the topic
• Encourage wild ideas
Pugh Chart
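A Pugh chart ranks design concepts against a reference ("datum") concept, scoring each criterion as better (+1), same (0), or worse (−1), then comparing net scores. The example below is entirely illustrative; the criteria and concept names are invented, not from the lecture:

```python
# Each concept is scored against the datum: +1 better, 0 same, -1 worse.
criteria = ["cost", "durability", "ease of assembly", "weight"]
scores = {                     # hypothetical concepts and scores
    "concept A": [+1, 0, -1, +1],
    "concept B": [0, +1, +1, -1],
    "concept C": [-1, -1, 0, 0],
}

# Rank concepts by net score; the datum itself scores 0 by definition.
for name, row in sorted(scores.items(), key=lambda kv: -sum(kv[1])):
    print(f"{name}: net {sum(row):+d}")
```

Net scores are a coarse filter: tied or closely ranked concepts are usually revisited with weighted criteria or combined into hybrids.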
The Design Process (cycle diagram): Problem → Gather Information → Think of ideas → Choose the best idea → Work out details → Experiment → Build → Test → Get feedback → Solution
Design for Developing Countries

“Brute force engineering options often meet the criteria, but somewhere there is a profound solution, which is simple, cheap, and beautiful. Hold out for this as long as possible.”
– Kurt Kornbluth, former D-Lab Instructor
Battery-operated field incubator: $1250
Thermo-electric field incubator: $500
Phase change incubator: $100

Commercial incubator photos (left and center) © source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see http://ocw.mit.edu/fairuse.

The Phase Change Incubator
(figure: temperature vs. time curve showing the plateau at the liquid–solid phase change)

Guiding Principles for DfDC
• Identify functional requirements
• Encourage participatory development
• Value indigenous knowledge
• Promote local innovation
• Strive for sustainability
Technology Case Studies

Coming up…
• Project Selection (Mar 1)
– Design challenge descriptions due for review by
Wednesday, Feb 17
– Slides due by noon on Wednesday, Feb 24
• Readings on course website
• Homework 1 (due Feb 10)
• Homework 3 (due Feb 10)
MIT OpenCourseWare
http://ocw.mit.edu
EC.720J / 2.722J D-Lab II: Design
Spring 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
6.897: Selected Topics in Cryptography
Lectures 9 and 10
Lecturer: Ran Canetti
Highlights of past lectures
Presented two frameworks for analyzing protocols:
• A basic framework:
– Only function evaluation
– Synchronous
– Non-adaptive corruptions
– Modular composition (only non-concurrent)
• A stronger framework (UC):
– General reactive tasks
– Asynchronous (can express different types of synchrony)
– Adaptive corruptions
– Concurrent modular composition (universal composition)
Review of the definition:

(diagram: the ideal process — environment Z, simulator S, parties P1–P4, and functionality F — shown side by side with a protocol execution — environment Z, adversary A, and parties P1–P4 running π)

Protocol π securely realizes F if:
For any adversary A there exists an adversary S such that no environment Z can tell whether it interacts with:
- A run of π with A
- An ideal run with F and S
Lectures 9 and 10
UC Commitment and Zero-Knowledge
• Quick review of known feasibility results in the UC
framework.
• UC commitments: The basic functionality, Fcom.
•
Impossibility of realizing Fcom in the plain model.
• Realizing Fcom in the common reference string model.
• Multiple commitments with a single string:
– Functionality Fmcom.
– Realizing Fmcom.
• From UC commitments to UC ZK:
Realizing Fzk in the Fcom-hybrid model.
Questions:
• How to write ideal functionalities that adequately capture known/new tasks?
• Are known protocols UC-secure? (Do these protocols realize the ideal functionalities associated with the corresponding tasks?)
• How to design UC-secure protocols?
Existence results: Honest majority
Multiparty protocols with honest majority:
Thm: Can realize any functionality [C. 01]. (e.g. use the protocols of [BenOr-Goldwasser-Wigderson88, Rabin-BenOr89, Canetti-Feige-Goldreich-Naor96].)

Two-party functionalities
• Known protocols do not work. (“Black-box simulation with rewinding” cannot be used.)
• Many interesting functionalities (commitment, ZK, coin tossing, etc.) cannot be realized in the plain model.
• In the “common random string model” can do:
– UC Commitment [Canetti-Fischlin01, Canetti-Lindell-Ostrovsky-Sahai02, Damgard-Nielsen02, Damgard-Groth03, Hofheinz-QuedeMueler04].
– UC Zero-Knowledge [CF01, DeSantis et al. 01]
– Any two-party functionality [CLOS02, Cramer-Damgard-Nielsen03]. (Generalizes to any multiparty functionality with any number of faults.)
UC Encryption and signature
• Can write a “digital signature functionality” Fsig. Realizing Fsig is equivalent to “security against chosen message attacks” as in [Goldwasser-Micali-Rivest88].
– Using Fsig, can realize “ideal certification authorities” and “ideally authenticated communication”.
• Can write a “public key encryption functionality”, Fpke. Realizing Fpke w.r.t. non-adaptive adversaries is equivalent to “security against chosen ciphertext attacks (CCA)” as in [Rackoff-Simon91, Dolev-Dwork-Naor91, …].
– Can formulate a relaxed variant of Fpke that still captures most of the current applications of CCA security.
– What about realizing Fpke w.r.t. adaptive adversaries?
• As is, it’s impossible.
• Can relax Fpke a bit so that it becomes possible (but still very complicated) [Canetti-Halevi-Katz04]. How to do it simply?
UC key-exchange and secure channels
• Can write ideal functionalities that capture Key-Exchange and Secure-Channels.
• Can show that natural and practical protocols are secure: ISO 9798-3, IKEv1, IKEv2, SSL/TLS, …
• What about password-based key exchange?
• What about modeling symmetric encryption and message authentication as ideal functionalities?

UC commitments
The commitment functionality, Fcom
1. Upon receiving (sid,C,V,“commit”,x) from
(sid,C), do:
1. Record x
2. Output (sid,C,V, “receipt”) to (sid,V)
3. Send (sid,C,V, “receipt”) to S
2. Upon receiving (sid,“open”) from (sid,C), do:
1. Output (sid,x) to (sid,V)
2. Send (sid,x) to S
3. Halt.
Note: Each copy of Fcom is used for a single commitment/decommitment only. Multiple commitments require multiple copies of Fcom.
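The bookkeeping of Fcom can be sketched in code. The class below is an illustrative model of my own (the names and structure are not from the lecture): it records x at commit time, emits the receipt, and releases x only when the committer opens, mirroring steps 1 and 2 and enforcing that each instance handles a single commitment:

```python
class FCom:
    """Toy model of the ideal commitment functionality Fcom (illustrative)."""

    def __init__(self, sid, committer, verifier):
        self.sid, self.C, self.V = sid, committer, verifier
        self.value = None        # the recorded committed value
        self.opened = False

    def commit(self, sender, x):
        # Step 1: only C may commit, and only once per instance.
        assert sender == self.C and self.value is None
        self.value = x           # step 1.1: record x
        return (self.sid, self.C, self.V, "receipt")   # steps 1.2-1.3

    def open(self, sender):
        # Step 2: only C may open, and only after a commitment exists.
        assert sender == self.C and self.value is not None and not self.opened
        self.opened = True
        return (self.sid, self.value)                  # steps 2.1-2.2, then halt

f = FCom("sid1", "C", "V")
print(f.commit("C", 0))
print(f.open("C"))
```

The point of the ideal functionality is exactly this triviality: hiding and binding hold by construction, and a protocol "realizes" Fcom when no environment can distinguish it from this object.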
Impossibility of realizing Fcom in the plain model

Fcom can be realized:
– By a “trivial” protocol that never generates any output. (The simulator never lets Fcom send output to any party.)
– By a protocol that uses third parties as “helpers”.
⇒ A protocol is:
– Terminating, if when run between two honest parties, some output is generated by at least one party.
– Bilateral, if only two parties participate in it.

Theorem: There exist no terminating, bilateral protocols that securely realize Fcom in the plain real-life model. (The theorem holds even in the Fauth-hybrid model.)
Proof Idea:
Let P be a protocol that realizes Fcom in the plain model,
and let S be an ideal-process adversary for P, for the
case that the committer is corrupted.
Recall that S has to explicitly give the committed bit to
Fcom before the opening phase begins. This means that
S must be able to somehow “extract” the committed
value b from the corrupted committer.
However, in the UC framework S has no advantage over a
real-life verifier. Thus, a corrupted verifier can essentially
run S and extract the committed bit b from an honest
committer, before the opening phase begins, in
contradiction to the secrecy of the commitment.
More precisely, we proceed in two steps:

(I) Consider the following environment Zc and real-life adversary Ac that controls the committer C:
– Ac is the dummy adversary: It reports to Zc any message received from the verifier V, and sends to V any message provided by Zc.
– Zc chooses a random bit b, and runs the code of the honest C by instructing Ac to deliver all the messages sent by C. Once V outputs “receipt”, Zc runs the opening protocol of C with V, and outputs 1 if the output bit b' generated by V is equal to b.

From the security of P there exists an ideal-process adversary Sc such that IDEAL_Fcom,Sc,Zc ~ EXEC_P,Ac,Zc. But:
– In the real-life model, b', the output of V, is almost always the same as the bit b that Zc secretly chose.
– Consequently, also in the ideal process, b'=b almost always.
– Thus, the bit b'' that Sc provides Fcom at the commitment phase is almost always equal to b.
(II) Consider the following environment Zv and real-life adversary Av that controls the verifier V:
– Zv chooses a random bit b, gives b as input to the honest committer, and outputs 1 if the adversary outputs a bit b'=b.
– Av runs Sc. Any message received from C is given to Sc, and any message generated by Sc is given to C. When Sc outputs a bit b' to be given to Fcom, Av outputs b' and halts.

Notice that the view of Sc when run by Av is identical to its view when interacting with Zc in the ideal process for Fcom. Consequently, from part (I) we have that in the run of Zv and Av almost always b'=b.

However, when Zv interacts with any simulator S in the ideal process for Fcom, the view of S is independent of b. Thus Zv outputs 1 w.p. at most ½.

This contradicts the assumption that P securely realizes Fcom.
The common reference string functionality

Functionality Fcrs (with prescribed distribution D)
1. Choose a value r from distribution D, and send r to the adversary.
2. Upon receiving (“CRS”, sid) from party P, send r to P.

Note: The Fcrs-hybrid model is essentially the “common reference string model”, as usually defined in the literature (cf. Blum-Feldman-Micali89). In particular: An adversary in the Fcrs-hybrid model expects to get the value of the CRS from the ideal functionality. Thus, in a simulated interaction, the simulator can choose the CRS by itself (and in particular it can know trapdoor information related to the CRS).
Theorem: If trapdoor permutation pairs exist then there exist terminating, bilateral protocols that realize Fcom in the (Fauth, Fcrs)-hybrid model.

Remarks:
• Here we’ll only show the [CF01] construction, which is based on claw-free pairs of trapdoor permutations.
• [DG03] showed that UC commitments imply key exchange, so no black-box constructions from OWPs exist.
• More efficient constructions based on Paillier’s assumption exist [DN02, DG03, CS03].
Realizing Fcom in the Fcrs-hybrid model
• Roughly speaking, we need to make sure
that the ideal model adversary for Fcom
can:
– Extract the committed value from a corrupted
committer.
– Generate commitments that can be opened in
multiple ways.
– Explain internal state of committer and verifier
upon corruption (for adaptive security).
First attempt
• To obtain equivocability:
– Let f = {f0, f1, f0^-1, f1^-1} be a claw-free pair of trapdoor permutations. That is:
  • f0, f1 are over the same domain.
  • Given fi and x it is easy to compute fi(x).
  • Given fi^-1 and x it is easy to compute fi^-1(x).
  • Given only f0, f1, it is hard to find x0, x1 such that f0(x0) = f1(x1).
– Commitment Scheme:
  • CRS: f0, f1
  • To commit to bit b, choose random x in the domain of f and send fb(x). To open, send b, x.
– The simulator chooses the CRS so that it knows the trapdoors f0^-1, f1^-1. Now it can equivocate: find x0, x1 s.t. f0(x0) = f1(x1) = y, and send y.
• But: Not extractable…
Second attempt
• To add extractability:
– Let (G, E, D) be a semantically secure encryption scheme.
– Commitment Scheme:
  • Let G(k) = (e, d). CRS: f0, f1, e.
  • To commit to a bit b, choose random x, r, and send fb(x), Ee(r, x). To open, send b, x, r.
– The simulator chooses the CRS such that it knows the decryption key d. So it can decrypt and extract b.
• But: lost equivocability…
Third attempt
• To restore equivocability:
– Scheme:
  • CRS: f0, f1, e
  • To commit to b: choose random x, r0, r1 and send fb(x), Ee(rb, x), Ee(r1-b, 0).
  • To open, send b, x, rb. (Don’t send r1-b.)
– To extract, the simulator decrypts both encryptions and finds x.
– To equivocate, the simulator chooses x0, x1, r0, r1 such that f0(x0) = f1(x1) = y and sends y, Ee(r0, x0), Ee(r1, x1).
The protocol (UCC) for static adversaries
• On input (sid, C, V, “commit”, b), C does:
– Choose random x, r0, r1. Obtain f0, f1, e from Fcrs.
– Compute y = fb(x), cb = Ee(rb, x), c1-b = Ee(r1-b, 0), and send (sid, C, V, y, c0, c1) to V.
• When receiving (sid, C, V, y, c0, c1) from C, V outputs (sid, C, “receipt”, C).
• On input (sid, “open”), C does:
– Send b, x, rb to V.
• Having received b, x, r, V verifies that fb(x) = y and cb = Ee(r, x). If verification succeeds then output (“Open”, sid, cid, C, b). Else output nothing.
Proof of security (static case)
Let A be an adversary that interacts with parties running protocol UCC in the Fcrs-hybrid model. We construct a simulator S in the ideal process for Fcom and show that for any environment Z,
IDEAL_Fcom,S,Z ~ EXEC_ucc,A,Z

Simulator S:
• Choose a c.f.p. (f0, f1, f0^-1, f1^-1) and keys (e, d) for the encryption scheme.
• Run a simulated copy of A and give it the CRS (f0, f1, e).
• All messages between A and Z are relayed unchanged.
• If the committer C is uncorrupted:
– If S is notified by Fcom that C wishes to commit to party V, then simulate for A a commitment from C to V: Choose y, compute x0 = f0^-1(y), x1 = f1^-1(y), c0 = Ee(r0, x0), c1 = Ee(r1, x1), and send (y, c0, c1) from C to V. When A delivers this message to V, send “ok” to Fcom.
– If S is notified by Fcom that C opened the commitment to value b, then S simulates for A the opening message (b, xb, rb) from C to V.
• If C is corrupted:
– If a corrupted C sends a commitment (y, c0, c1) to V, then S decrypts c0 and c1:
  • If c0 decrypts to x0 where x0 = f0^-1(y), then send (sid, C, V, “commit”, 0) to Fcom.
  • If c1 decrypts to x1 where x1 = f1^-1(y), then send (sid, C, V, “commit”, 1) to Fcom.
– If C sends a valid opening message (b', x, r) (i.e., x = fb'^-1(y) and cb' = Ee(r, x)), then S checks whether b' equals the bit sent to Fcom. If yes, then S sends (sid, “Open”) to Fcom. Otherwise, S aborts the simulation.
Analysis of S:
Let Z be an environment. Define first the following hybrid interaction HYB: Interaction HYB is identical to IDEAL_Fcom,S,Z, except that when S generates commitments by uncorrupted parties, it “magically learns” the real bit b, and then uses real (not fake) commitments. That is, the commitment is (y, c0, c1) where c1-b = Ee(r1-b, 0).

We proceed in two steps:
1. Show that EXEC_ucc,A,Z ~ HYB. This is done by reduction to the security of the claw-free pair.
2. Show that HYB ~ IDEAL_Fcom,S,Z. This is done by reduction to the semantic security of the encryption scheme.
Step 1: Show that EXEC_ucc,A,Z ~ HYB:
• Note that the interactions EXEC_ucc,A,Z and HYB are identical, as long as the adversary does not abort in an opening of a commitment made by a corrupted party.
• We show that if S aborts with probability p then we can find claws in (f0, f1) with probability p. That is, construct the following adversary D:
– Given (f0, f1), D simulates an interaction between Z and S (running A) when the c.f.p. in the CRS is (f0, f1). D plays the role of S for Z and A. Since D sees all the messages sent by Z, it knows the bits committed to by the uncorrupted parties, and can simulate the interaction perfectly. Furthermore, whenever S aborts then D finds a claw in (f0, f1): S aborts if A provides a valid commitment to a bit b and then a valid opening to 1-b. But in this case A generated a claw!
Step 2: Show that HYB ~ IDEAL_Fcom,S,Z:
Recall that the difference between HYB and IDEAL_Fcom,S,Z is that in HYB the commitments generated by S are real, whereas in IDEAL_Fcom,S,Z these commitments are fake.

Assume an environment Z and adversary A that distinguish between the two interactions. Construct an adversary B that breaks the semantic security of (E, D): Given encryption key e, B simulates an interaction between Z and S (running A) when the encryption key in the CRS is e. B plays the role of S for Z and A. Furthermore, when S needs to generate a commitment (y, c0, c1), B does:
• cb is generated honestly as cb = Ee(rb, xb). (Recall, B knows b.)
• B asks its encryption oracle to encrypt one out of (0, x1-b) and sets the answer c* to be c1-b.

Analysis of B:
• If c* = E(0) then the simulated Z sees an HYB interaction.
• If c* = E(x1-b) then the simulated Z sees an IDEAL_Fcom,S,Z interaction.
Since Z distinguishes between the two, B breaks the semantic security of the encryption scheme.
Dealing with adaptive adversaries
Recall the protocol (UCC) for static adversaries
• On input (sid,C,V,“commit”,b) C does:
– Choose random x,r0,r1. Obtain f0,f1, e from Fcrs.
– Compute y= fb(x), cb=Ee(rb,x), c1-b=Ee(r1-b,0),
and send (sid,C,V,y,c0,c1) to V.
• When receiving (sid,C,V,y,c0,c1) from C, V outputs
(sid,C,“receipt”,C).
• On input (sid,“open”), C does:
– Send b,x,rb to V.
• Having received b,x,r, V verifies that fb(x)=y and cb=Ee(r,x).
If verification succeeds then output (“Open”,sid,cid,C,b).
Else output nothing.
Problem: When the committer is corrupted, it needs to present
the randomness r1-b. Now S is stuck…
Solutions:
• Erase r1-b immediately after use inside the encryption.
• If we do not trust erasures: use an encryption where ciphertexts are “pseudorandom”. Then the commitment protocol changes to:
– Choose random x,r0,r1. Obtain f0,f1, e from Fcrs.
– Let y= fb(x), cb=Ee(rb,x), c1-b=r1-b, and send (sid,C,V,y,c0,c1) to V.
Simulation changes accordingly.
Note: Secure encryption with pseudorandom ciphertexts exists given any trapdoor permutation: use the Goldreich-Levin hard-core bit.
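The message shapes of this trick can be sketched in a few lines of Python. This is a toy model only: SHA-256 stands in for both the claw-free pair and the encryption scheme (hypothetical stand-ins, not the actual constructions), and the only point is the structure of the commitment: c_b is a real ciphertext while c_{1-b} is a uniformly random string that a pseudorandom-ciphertext scheme would make indistinguishable from one.

```python
import hashlib
import os

def H(*parts: bytes) -> bytes:
    # SHA-256 over the concatenation of the parts.
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_enc(key: bytes, msg: bytes) -> bytes:
    # Toy "encryption": XOR with a hash-derived keystream (NOT secure).
    return xor(msg, H(b"keystream", key))

def commit(b: int):
    x = os.urandom(32)        # preimage (stand-in for the claw-free input)
    r = os.urandom(32)        # encryption randomness for the real branch
    y = H(b"f", x)            # stand-in for y = f_b(x)
    c = [None, None]
    c[b] = toy_enc(r, x)      # c_b really encrypts x
    c[1 - b] = os.urandom(32) # c_{1-b} is just a random string
    return (y, c[0], c[1]), (b, x, r)

def verify(commitment, opening) -> bool:
    y, c0, c1 = commitment
    b, x, r = opening
    return y == H(b"f", x) and (c0, c1)[b] == toy_enc(r, x)
```

Opening to the other bit fails because the random branch almost surely does not decrypt consistently, which mirrors why the committer never needs to present randomness for c_{1-b}.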
How to re-use the CRS?
Functionality Fcom handles only a single commitment.
Thus, to obtain multiple commitments one needs
multiple copies of Fcom . When replacing each copy of
Fcom with a protocol P that realizes it in the Fcrs-hybrid
model, one obtains multiple copies of P, which in turn
use multiple independent copies of Fcrs…
• Can we realize multiple copies of Fcom using a single
copy of Fcrs?
• How to formalize that?
The multi-instance commitment
functionality, Fmcom
1. Upon receiving (sid,cid,C,V,“commit”,x) from
(sid,C), do:
1. Record (cid,x)
2. Output (sid,cid,C,V, “receipt”) to (sid,V)
3. Send (sid,cid,C,V, “receipt”) to S
2. Upon receiving (sid,cid,“open”) from (sid,C), do:
1. Output (sid,cid,x) to (sid,V)
2. Send (sid,cid,x) to S
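The bookkeeping of Fmcom is simple enough to state as code. A minimal sketch of the ideal functionality (method names and the only-the-committer-may-open check are my additions, not part of the slide's definition):

```python
class Fmcom:
    """Ideal multi-instance commitment functionality: many commitments,
    distinguished by (sid, cid), through a single functionality instance."""

    def __init__(self):
        self._store = {}   # (sid, cid) -> (C, V, x)

    def commit(self, sid, cid, C, V, x):
        # Upon receiving (sid, cid, C, V, "commit", x) from (sid, C):
        # record (cid, x); output a receipt to (sid, V) and send it to S.
        if (sid, cid) in self._store:
            return None    # a cid may not be reused within the same sid
        self._store[(sid, cid)] = (C, V, x)
        return (sid, cid, C, V, "receipt")

    def open(self, sid, cid, sender):
        # Upon receiving (sid, cid, "open") from (sid, C):
        # output (sid, cid, x) to (sid, V) and send it to S.
        rec = self._store.get((sid, cid))
        if rec is None or rec[0] != sender:
            return None    # unknown cid, or the opener is not the committer
        return (sid, cid, rec[2])
```

Keying the store on (sid, cid) is exactly what lets one functionality instance serve many commitments, which is the point of the multi-instance formalization.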
How to realize Fmcom?
• Trivial solution: Run multiple copies of protocol ucc,
where each copy uses its own copy of Fcrs…
• But, can we do it with a single copy of Fcrs?
• Does protocol ucc do the job?
Attempt 1: Run as is.
Bad: Adversary can copy commitments.
Attempt 2: Include the committer’s id inside the encryption. I.e., in the
commitment phase compute cb=Ee(rb,C.x), c1-b=Ee(r1-b,C.0).
Bad: Adversary can change the encrypted id inside c0,c1.
Attempt 3: Use CCA2 (“non-malleable”) encryption.
Works…
The protocol (UCMC) for static adversaries
• On input (“commit”,V,b,sid,cid) C does:
– Choose random x,r0,r1. Obtain f0,f1, e from Fcrs.
(Now e is the encryption key of a CCA2-secure encryption scheme.)
– Compute y= fb(x), cb=Ee(rb,C.x), c1-b=Ee(r1-b,C.0), and send
(sid,cid,C,V,y,c0,c1) to V.
• When receiving (sid,cid,C,V,y,c0,c1) from C, V outputs (“receipt”,C,sid,cid).
• On input (“open”,sid,cid), C does:
– Send b,x,rb to V.
• Having received b,x,rb, V verifies that fb(x)=y and cb=Ee(rb,C.x), and that cid
never appeared before in a commitment of C.
If verification succeeds then output (“Open”,sid,cid,C,b).
Else output nothing.
Proof of security (static case)
• The simulator S is identical to that of UCC, except that
here it handles multiple commitments and
decommitments.
• Analysis of S:
– Define the same hybrid interaction HYB.
– The proof that EXECucc,A,Z ~ HYB remains essentially the same, except that here there are many commitments and decommitments.
– The proof that HYB ~ IDEALFmcom,S,Z is similar in structure to the proof for the single commitment case, except that here the reduction is to the CCA security of the encryption:
Simulator S:
• Choose a c.f.p. (f0, f1, f0^-1, f1^-1) and keys (e,d) for the encryption scheme.
• Run A and give it the CRS (f0, f1, e).
• All messages between A and Z are relayed unchanged.
• Commitments by uncorrupted parties:
– If S is notified by Fmcom that an uncorrupted C wishes to commit to party V with a given cid, then simulate for A a commitment from C to V: choose y, compute x0=f0^-1(y), x1=f1^-1(y), c0=Ee(r0,C.x0), c1=Ee(r1,C.x1), and send (y, c0, c1) from C to V. When A delivers this message to V, send “ok” to Fmcom.
– If S is notified by Fmcom that C opened the commitment cid to value b, then it simulates for A an opening message (b, xb, rb) from C to V.
• Commitments by corrupted parties:
– If A sends a commitment (cid, y, c0, c1) in the name of a corrupted committer C to some V, then S decrypts c0. If c0 decrypts to C.x0 where x0=f0^-1(y), then let b=0. Else b=1. Then, send (“commit”,C,V,b,sid,cid) to Fmcom.
– If A sends a valid opening message (b’,x,r) for some cid (i.e., x=fb’^-1(y) and cb’=Ee(r,C.x)), and b’=b, then S sends (“Open”,sid,cid) to Fmcom. If b’ != b, then S aborts the simulation.
Analysis of S:
Let Z be an environment. Define first the following hybrid interaction HYB:
Interaction HYB is identical to IDEALFmcom,S,Z, except that when S generates commitments by uncorrupted parties, it “magically learns” the real bit b, and then uses real (not fake) commitments. That is, the commitment is (y, c0, c1) where c1-b=Ee(r1-b,C.0).
We proceed in two steps:
1. Show that EXECucc,A,Z ~ HYB. This is done by reduction to the security of the claw-free pair.
2. Show that HYB ~ IDEALFmcom,S,Z. This is done by reduction to the security of the encryption scheme.
Step 1: Show that EXECucc,A,Z ~ HYB:
• Note that the interactions EXECucc,A,Z and HYB are identical, as long as the adversary does not abort in an opening of a commitment made by a corrupted party.
• We show that if S aborts with probability p then we can find claws in (f0, f1) with probability p. That is, construct the following adversary D:
– Given (f0, f1), D simulates an interaction between Z and S (running A) when the c.f.p. in the CRS is (f0, f1). D plays the role of S for Z and A. Since D sees all the messages sent by Z, it knows the bits committed to by the uncorrupted parties, and can simulate the interaction perfectly.
Furthermore, whenever S aborts, D finds a claw in (f0, f1): S aborts if A provides a valid commitment to a bit b and then a valid opening to 1−b. But in this case A has generated a claw!
Step 2: Show that HYB ~ IDEALFmcom,S,Z:
Recall that the difference between HYB and IDEALFmcom,S,Z is that in HYB the commitments generated by S are real, whereas in IDEALFmcom,S,Z these commitments are fake.
Assume an env. Z that distinguishes between the two interactions. Construct a CCA-adversary B that breaks the security of (E,D). (In fact, B will interact in a Left-or-Right CCA interaction.)
Given encryption key e, B simulates an interaction between Z and S (running A) when the encryption key in the CRS is e. B plays the role of S for Z and A. Furthermore:
– When S needs to generate a commitment (y, c0, c1), B does:
• cb is generated honestly as cb=Ee(rb,C.xb). (Recall, B knows b.)
• B asks its encryption oracle to encrypt one out of (0, C.x1-b) and sets the answer to be c1-b.
– When A sends a commitment (y, c0, c1), B does:
• If either c0 or c1 is a test ciphertext then it can be safely ignored, since it contains an ID of an uncorrupted party. Else, B asks its decryption oracle to decrypt, and continues running S.
Note:
• If B’s oracle is a “Left” oracle (i.e., all the test ciphertexts are encryptions of ID.0) then the simulated Z sees an HYB interaction.
• If B’s oracle is a “Right” oracle (i.e., all the test ciphertexts are encryptions of ID.x1-b) then the simulated Z sees an IDEALFmcom,S,Z interaction.
Since Z distinguishes between the two, B breaks the LR-CCA security of the encryption scheme.
Dealing with adaptive corruptions
Use the same trick as in the single-commitment case.
Question: How to obtain CCA-secure encryption with p.r. ciphertexts?
– Cramer-Shoup…
– Use double encryption: E(x)=E’(E”(x)), where:
• E’ is CPA-secure with p.r. ciphertexts (e.g., standard encryption based on hard-core bits of trapdoor permutations).
• E” is CCA-secure.
Note: E is not CCA-secure, but is good enough…
UC Zero-Knowledge from UC commitments
• Recall the ZKPoK ideal functionality, Fzk, and the
version with weak soundness, Fwzk.
• Recall the Blum Hamiltonicity protocol
• Show that, when cast in the Fcom-hybrid model, a
single iteration of the protocol realizes Fwzk.
(This result is unconditional, no reductions or
computational assumptions are necessary.)
• Show that can realize Fzk using k parallel copies
of Fwzk.
The ZKPoK functionality Fzk (for relation H(G,h)).
1. Receive (sid, P,V,G,h) from (sid,P).
Then:
1. Output (sid, P, V, G, H(G,h)) to (sid,V)
2. Send (sid, P, V, G, H(G,h)) to S
3. Halt.
The weak ZKPoK functionality Fwzk
(for relation H(G,h)).
1. Receive (sid,P,V,G,h) from (sid,P). Then:
1. If P is corrupted then:
• Choose b ←R {0,1} and send b to S.
• Obtain a bit b’ and a cycle h’ from S, and replace h ← h’.
2. If H(G,h)=1 or b’=b=1 then set v ← 1. Else v ← 0.
3. Output (sid,P,V,G,v) to (sid,V) and to S.
4. Halt.
The Blum protocol in the Fcom-hybrid model
(“single iteration”)
Input: sid,P,V, graph G, Hamiltonian cycle h in G.
• P → V: Choose a random permutation p on [1..n]. Let bi be the i-th bit in p(G).p. Then, for each i send to Fcom: (sid.i,P,V,“Commit”,bi).
• V → P: When getting “receipt”, send a random bit c.
• P → V: If c=0 then send Fcom: (sid.i,“Open”) for all i. If c=1 then open only commitments of edges in h.
• V accepts if all the commitment openings are received from Fcom and in addition:
– If c=0, the opened graph and permutation match G.
– If c=1, h is a Hamiltonian cycle.
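A single iteration can be exercised end to end if a plain Python dict plays the part of Fcom: the ideal functionality makes the commitments perfectly hiding and binding, so no cryptography is needed in the sketch. The adjacency-matrix encoding and helper names below are mine.

```python
import random

def blum_iteration(G, h, c):
    """One iteration of Blum's Hamiltonicity protocol with an ideal commitment.
    G: n x n 0/1 adjacency matrix; h: Hamiltonian cycle as a vertex list;
    c: the verifier's challenge bit. Returns the verifier's decision."""
    n = len(G)
    p = list(range(n))
    random.shuffle(p)                       # P's random permutation
    pG = [[G[p[i]][p[j]] for j in range(n)] for i in range(n)]
    fcom = {"graph": pG, "perm": p}         # P's commitments, held by "Fcom"

    if c == 0:
        # P opens everything; V checks the opened graph really is p(G).
        q, opened = fcom["perm"], fcom["graph"]
        return opened == [[G[q[i]][q[j]] for j in range(n)] for i in range(n)]

    # c == 1: P opens only the cycle edges, relabelled into the permuted graph.
    inv = [0] * n
    for i in range(n):
        inv[p[i]] = i                       # inv = p^{-1}
    hp = [inv[v] for v in h]                # the cycle as it appears in pG
    edges_ok = all(fcom["graph"][hp[k]][hp[(k + 1) % n]] == 1 for k in range(n))
    return edges_ok and sorted(hp) == list(range(n))
```

An honest prover with a genuine Hamiltonian cycle convinces the verifier under either challenge; a prover lacking a cycle can prepare for at most one of the two, which is where the 1/2 soundness error of a single iteration comes from.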
Claim: The Blum protocol securely realizes Fwzk^H in the Fcom-hybrid model.
Proof sketch: Let A be an adversary that interacts with the
protocol. Need to construct an ideal-process adversary S that
fools all environments. There are four cases:
1. A controls the verifier (Zero-Knowledge):
S gets input z’ from Z, and runs A on input z’. Next:
– If the value from Fzk is (G,0) then hand (G,”reject”) to A.
– If the value from Fzk is (G,1) then simulate an interaction for V:
• For all i, send (sid.i, “receipt”) to A.
• Obtain the challenge c from A.
• If c=0 then send openings of a random permutation of G to A.
• If c=1 then send an opening of a random Hamiltonian tour to A.
The simulation is perfect…
2. A controls the prover (weak extraction):
S gets input z’ from Z, and runs A on input z’. Next:
I. Obtain from A all the “commit” messages to Fcom and record the
committed graph and permutation. Send (sid,P,V,G,h=0) to Fwzk.
II. If the bit b obtained from Fwzk is 1 (i.e., Fwzk is going to allow cheating)
then send the challenge c=0 to A.
If b=0 (i.e., no cheating allowed in this run) then send c=1 to A.
III. Obtain A’s openings of the commitments in step 3 of the protocol.
If c=0 and all openings are obtained and are consistent with G, then send b’=1 to Fwzk. If c=0 and some openings are bad or inconsistent with G, then send b’=0 (i.e., no cheating, and V should not accept).
If c=1 then obtain A’s openings of the commitments to the Hamiltonian cycle h’. If h’ is a Hamiltonian cycle then send h’ to Fwzk. Otherwise, send h’=0 to Fwzk.
2. A controls the prover (weak extraction):
Analysis of S:
The simulation is perfect. That is, the joint view of the
simulated A together with Z is identical to their view in
an execution in the Fcom –hybrid model:
– V’s challenge c is uniformly distributed.
– If c=0 then V’s output is 1 iff A opened all commitments and the permutation is consistent with G.
– If c=1 then V’s output is 1 iff A opened a real Hamiltonian cycle in G.
3. A controls neither party or both parties: Straightforward.
From Fwzk^R to Fzk^R
A protocol for realizing Fzk^R in the Fwzk^R-hybrid model:
• P(x,w): Run k copies of Fwzk^R, in parallel. Send (x,w) to each copy.
• V: Run k copies of Fwzk^R, in parallel. Receive (xi,bi) from the i-th copy. Then:
– If all x’s are the same and all b’s are the same then output (x,b).
– Else output nothing.
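The amplification behind running k parallel copies is just the observation that a witness-less prover beats one Fwzk copy only when its coin b comes up 1, so beating all k copies happens with probability 2^-k. A quick numerical check of that arithmetic (the coin simulation below is mine, not part of the protocol):

```python
import random

def cheating_prover_wins(k: int) -> bool:
    # One Fwzk copy lets a witness-less prover cheat only when b = 1;
    # with k parallel copies it must get b = 1 in every copy.
    return all(random.getrandbits(1) == 1 for _ in range(k))

def estimate_cheat_prob(k: int, trials: int = 200_000) -> float:
    # Monte Carlo estimate of the cheating probability, expected ~ 2**-k.
    return sum(cheating_prover_wins(k) for _ in range(trials)) / trials
```

With k on the order of a security parameter, 2^-k is negligible, which is why k parallel copies of the weak functionality suffice to realize Fzk.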
Analysis of the protocol
Let A be an adversary that interacts with the protocol in the Fwzk^R-hybrid model. Need to construct an ideal-process adversary S that interacts with Fzk^R and fools all environments. There are four cases:
1. A controls the verifier: In this case, all A sees is the value (x,b) coming in k times, where (x,b) is the output value. This is easy to simulate: S obtains (x,b) from TP, gives it to A k times, and outputs whatever A outputs.
2. A controls the prover: Here, A should provide k inputs x1…xk to the k copies of Fwzk^R, obtain k bits b1…bk from these copies of Fwzk^R, and should give witnesses w1…wk in return. S runs A, obtains x1…xk, gives it k random bits b1…bk, and obtains w1…wk. Then:
– If all the x’s are the same and all copies of Fwzk^R would accept, then find a wi such that R(x,wi)=1, and give (x,wi) to Fzk^R. (If no such wi is found then fail. But this will happen only if b1…bk are all 1, and this occurs with probability 2^-k.)
– Else give (x,w’) to Fzk^R, where w’ is an invalid witness.
Analysis of S:
– When the verifier is corrupted, the views of Z from both
interactions are identically distributed.
– When the prover is corrupted, conditioned on the event
that S does not fail, the views of Z from both interactions
are identically distributed. Furthermore, S fails only if b1…bk are all 1, and this occurs with probability 2^-k.
Note: The analysis is almost identical to the non-concurrent
case, except that here the composition is in parallel. | https://ocw.mit.edu/courses/6-897-selected-topics-in-cryptography-spring-2004/054e4a5552a39239b46a1e5a5c09cf13_lecture9_10.pdf |
Welcome back to 8.033!
Emmy Noether
1882-1935
Image courtesy of Wikipedia.
MIT Course 8.033, Fall 2006, Lecture 2
Max Tegmark
PRACTICAL STUFF:
• PS1 due Friday 4PM
• Symmetry notes posted
TODAY’S TOPIC: SYMMETRY IN PHYSICS
• Key concepts: frame, inertial frame, transformation, invariant, invariance, symmetry, relativity
• Key people: Galileo Galilei, Emmy Noether
• Symmetry examples: translation, rotation, parity, boost
• Million Dollar question: what are the symmetries of physics?
What do we mean by symmetry?
WHAT’S THE SYMMETRY OF THE UNIVERSE? OF PHYSICS?
[Figures by MIT OCW.]
WHAT’S THE SYMMETRY OF CLASSICAL MECHANICS?
[Figures by MIT OCW.]
SO WHICH DO YOU TRUST MORE: Classical Mechanics, or E&M?
ESD.86
Markov Processes and their
Application to Queueing
Richard C. Larson
March 5, 2007
Photo courtesy of Johnathan Boeke. http://www.flickr.com/photos/boeke/134030512/
Outline
• Spatial Poisson Processes, one more time
• Introduction to Queueing Systems
• Little’s Law
• Markov Processes
Spatial
Poisson
Processes
Courtesy of Andy Long. Used with permission.
http://zappa.nku.edu/~longa/geomed/modules/ss1/lec/poisson.gif
Spatial Poisson Processes
• Entities distributed in space (Examples?)
• Follow the postulates of the (time) Poisson process:
– λdt = probability of a Poisson event in dt
– History not relevant
– What happens in disjoint time intervals is independent, one from the other
– The probability of two or more Poisson events in dt is second order in dt and can be ignored
• Let’s fill in the spatial analogue…
Set S has area A(S). The Poisson intensity is γ entities/(unit area). X(S) is a random variable: the number of Poisson entities in S.

P{X(S) = k} = [(γA(S))^k / k!] e^(−γA(S)),  k = 0, 1, 2, …
Nearest Neighbors: Euclidean
Define D2 = distance from a random point to the nearest Poisson entity. Want to derive f_D2(r).

F_D2(r) ≡ P{D2 ≤ r} = 1 − P{D2 > r}
F_D2(r) = 1 − Prob{no Poisson entities in circle of radius r}
F_D2(r) = 1 − e^(−γπr²),  r ≥ 0

f_D2(r) = (d/dr) F_D2(r) = 2πγr e^(−γπr²),  r ≥ 0

a Rayleigh pdf with parameter 2πγ.
Nearest Neighbors: Euclidean
Define D2= distance from a random point
to nearest Poisson entity
Want to | https://ocw.mit.edu/courses/esd-86-models-data-and-inference-for-socio-technical-systems-spring-2007/0565f132e474fba29bbf61e88252fa78_lec8.pdf |
Random
Point
Nearest Neighbors: Euclidean
Define D2= distance from a random point
to nearest Poisson entity
Want to derive fD2
1
γ
E[D2] = (1/2)
(r).
"Square Root Law"
2 = (2 −π/2)
σD2
1
2πγ
f D2
(r) =
d
dr
FD2
(r) = 2rγπe−γπr 2
r ≥ 0
Rayleigh pdf with parameter 2γπ
r
Random
Point
Nearest Neighbor: Taxi Metric
F_D1(r) ≡ P{D1 ≤ r}
F_D1(r) = 1 − Prob{no Poisson entities in a diamond of radius r}
F_D1(r) = 1 − e^(−2γr²)

f_D1(r) = (d/dr) F_D1(r) = 4γr e^(−2γr²),  r ≥ 0
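Both nearest-neighbor results are easy to sanity-check by simulation. The sketch below approximates a spatial Poisson process of intensity γ by scattering ⌊γL²⌋ uniform points in an L×L square (a standard approximation; the parameter choices are mine) and compares the mean distance from the center to the nearest point against the Square Root Law E[D2] = (1/2)·√(1/γ):

```python
import math
import random

def nearest_distance(gamma: float, L: float = 10.0) -> float:
    # Distance from the square's center to the nearest of ~gamma*L^2
    # uniformly scattered entities (edge effects are negligible when
    # 1/sqrt(gamma) is small compared to L).
    n = int(gamma * L * L)
    cx = cy = L / 2
    best = float("inf")
    for _ in range(n):
        x, y = random.uniform(0, L), random.uniform(0, L)
        best = min(best, math.hypot(x - cx, y - cy))
    return best

def mean_nearest(gamma: float, trials: int = 2000) -> float:
    # Monte Carlo estimate of E[D2]; theory says 1 / (2 * sqrt(gamma)).
    return sum(nearest_distance(gamma) for _ in range(trials)) / trials
```

For γ = 4 the theory predicts E[D2] = 0.25, and the simulated mean lands within a couple of percent of that.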
How Might You Derive the PDF of the kth Nearest Neighbor?
Blackboard exercise!
To Queue or Not to Queue,
That May be a Question!
Queueing System
[Figure: arriving customers enter a queue of waiting customers, pass through a service facility, and depart. Figure by MIT OCW.]
[Figure: queueing system schematic. Servers: statistical clones? Finite or infinite? Queue capacity: finite or infinite? Queue discipline: how queuers are selected for service.]
Source: Larson and Odoni, Urban Operations Research
What Kinds of Queues Occur in
Systems of Interest to ESD?
ESD
Queues?
Photos courtesy, from top left, clockwise: U.S. FAA: Flickr user “*keng” http://www.flickr.com/photos/kengz/67187556/;
Luke Hoersten http://www.flickr.com/photos/lukehoersten/532375235/)
Little’s Law for Queues
[Figure: a(t), d(t), and L(t) versus time. Source: Larson and Odoni, Urban Operations Research]
a(t) = cumulative # arrivals to the system in (0,t]
d(t) = cumulative # departures from the system in (0,t]
L(t) = a(t) − d(t) = number of customers in the system (in queue and in service) at time t
γ(t) = ∫₀ᵗ [a(τ) − d(τ)] dτ = ∫₀ᵗ L(τ) dτ = total number of customer-minutes spent in the system
Let’s Get an Expression for Each of 3 Quantities
λ_t ≡ average customer arrival rate = a(t)/t
W_t ≡ average time that an arrived customer has spent in the system = γ(t)/a(t)
L_t ≡ time-average # customers in system during (0,t] = (1/t)∫₀ᵗ L(τ) dτ = γ(t)/t

L_t = γ(t)/t = [a(t)/t]·[γ(t)/a(t)] = λ_t·W_t

In the limit: L = λW, Little’s Law
Key Issues: L = λW
• L is a time-average. Explain.
• λ is the average arrival rate of customers who actually enter the system.
• W is the average time in system (in queue and in service) for actual customers who enter the system.
More Issues: L = λW
• Little’s Law is general. It does not depend on:
– Arrival process
– Service process
– # servers
– Queue discipline
– Renewal assumptions, etc.
• It just requires that the 3 limits exist.
Still More Issues: L = λW
• What about balking? Reneging? Finite capacity?
• Do we need iid service times? iid inter-arrival times?
• Do we need each busy period to behave statistically identically?
• Look at the role of γ(t). One can change queue statistics by changing the queue discipline.
[Figure: cumulative # of arrivals and departures under FCFS (First Come, First Served) vs. SJF (Shortest Job First); L(t) vs. L_SJF(t). What about LJF, Longest Job First?]
“System” is General: L = λW
• Our results apply to the entire queueing system, queue plus service facility.
• But they could apply to the queue only! L_q = λW_q
• Or to the service facility only! L_SF = λW_SF = λ/μ, where 1/μ = mean service time.
All of this means, “You buy one, you get the other 3 for free!”
W = 1/μ + W_q
L = L_q + L_SF = L_q + λ/μ
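The relationships above can be watched in a simulation. The sketch below runs a FCFS single-server queue with Poisson arrivals and exponential service (an M/M/1 choice of mine, though Little's Law itself assumes none of that) and confirms that the time-average L computed from the sample path equals λ̂·W̄ computed from the same path:

```python
import random

def mm1_little(lam: float, mu: float, n: int = 50_000, seed: int = 1):
    """Simulate a FCFS M/M/1 queue; return (L, lam_hat * W_bar), both
    measured on the same sample path. Little's Law says they match."""
    rng = random.Random(seed)
    t_arr = 0.0       # current arrival time
    t_free = 0.0      # time the server next becomes free
    gamma = 0.0       # total customer-time spent in the system
    for _ in range(n):
        t_arr += rng.expovariate(lam)
        start = max(t_arr, t_free)       # wait for the server if busy
        t_free = start + rng.expovariate(mu)
        gamma += t_free - t_arr          # this customer's time in system
    T = t_free                           # horizon: last departure
    L = gamma / T                        # time-average number in system
    lam_hat = n / T                      # observed arrival rate
    W_bar = gamma / n                    # average time in system per customer
    return L, lam_hat * W_bar
```

On the sample path the two quantities are equal by construction (both are γ(T)/T), which is exactly the accounting identity behind the law; with λ = 1 and μ = 2 the value also comes out near the M/M/1 prediction L = ρ/(1−ρ) = 1.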
Utilization Factor ρ
• Single server. Set Y = 1 if the server is busy, 0 if the server is idle.
E[Y] = 1·P{server is busy} + 0·P{server is idle}
E[Y] = 1·ρ + 0 = ρ = E[# customers in SF] = ?
• E[Y] is the time-average number of customers in the SF.
• By Little’s Law, ρ = λ/μ < 1
Utilization Factor ρ
• Similar logic for N identical parallel servers gives ρ = (λ/N)·(1/μ) = λ/(Nμ) < 1
• Here, λ/μ corresponds to the time-average number of servers busy.
Markov Queues
Markov here means, “No Memory”
Source: Larson and Odoni, Urban Operations Research
Balance of Flow Equations
λ0·P0 = μ1·P1
(λn + μn)·Pn = λn−1·Pn−1 + μn+1·Pn+1,  for n = 1, 2, 3, …
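For a birth-and-death chain these balance-of-flow equations telescope to the detailed-balance form λn·Pn = μn+1·Pn+1, which can be solved by recursion and normalization. A sketch (the truncation at n_max is my choice, valid when the tail probabilities are negligible):

```python
def birth_death_pmf(lam, mu, n_max):
    """Solve lam(n) * P[n] = mu(n+1) * P[n+1] for n = 0..n_max-1 and
    normalize. lam and mu are callables giving state-dependent rates."""
    p = [1.0]
    for n in range(n_max):
        p.append(p[-1] * lam(n) / mu(n + 1))   # detailed balance recursion
    total = sum(p)
    return [x / total for x in p]
```

For constant rates (λn = λ, μn = μ) this reproduces the M/M/1 steady state Pn = (1−ρ)ρⁿ with ρ = λ/μ.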
To be continued………….. | https://ocw.mit.edu/courses/esd-86-models-data-and-inference-for-socio-technical-systems-spring-2007/0565f132e474fba29bbf61e88252fa78_lec8.pdf |
3.37 (Class 7)
Review
• Book on explosive welds
• Wire Bonding
o Diagram on board (squeeze wire-thermal compression weld)
o Perimeter Bonding
o Up to 200-400 I/O (approx 50 per side, can make double rows but lose
real-estate on semiconductor)
• TAB Bonding
o Diagram on board
o Solder connection (gold and tin-plated connection, use a heated platen of
tungsten or aluminum, perhaps molybdenum)
o Perimeter Bonding
o Up to 400-800 I/O (approx 100 per side)
• Controlled Collapse Chip Connection (C4)
o Diagram on board (internal connection of silicon chip to substrate with
small solder balls, tin or indium-tin, sometimes around lead which doesn’t
melt so that chip stays above substrate, about 4mil)
o Balls help to self align by surface tension
o Ball Grid Array, sometimes refers to the connection of package to circuit
board, distinction between C4 and BGA becomes blurry when chip gets
mounted directly to the board.
o Don’t get much bigger than 1cm per side on chip due to thermal expansion
o Invented in 1960’s by IBM, just come into its own
o 1000-2000 I/O
o Area Bonding (lose some real-estate, but distributed throughout)
• Size of chips and speed
o Speed of light c = 3x10^8 m/s
o Lambda = c*t, with t one clock period
o Say 3GHz, so t = 1/(3x10^9) s and Lambda = c/f = 0.1m = 10cm
o Even if signals traveled at the speed of light, one clock period spans only ~10cm of wiring; the signal can barely get across and off the chip
o But signals only travel at about 10% of the speed of light, which has to do with the reactance (mostly capacitance) of the chip.
o Alpha chip at 1GHz required special electromagnetic design
o One of the ways around this is to use clockless computers, supposedly
able to get a 2-3x increase in speed.
Today
Adhesive Bonding
• Unique among welding and joining processes
• Only one that just buries the contamination
• For copy machines, essentially bonding toner to paper
o Company X, book starts with “gluons”
o Similar to starting with diagram of binding energy between atoms
o But don’t need to start at this level in adhesive bonding
• Start with surface, then add contaminants
o Oxides (very quickly forms)
o CO2
o H20
• Need something that has a lower surface energy
o Problem bonding to Teflon
• Types
o Type I
– Diagram on board
– Two pieces of solid, interpose a liquid that “wets” the surface
– Separation distance d, radius of curvature r
– See formula on board
– Soap bubbles obey this equation
– Blowing up balloons: need more pressure to start the balloon, at small radius of curvature
– Adjust for spherical or cylindrical bubbles
– Negative pressure in the wetting liquid
– Demonstration with Johansson blocks (gauge blocks used by machinists): very precisely ground blocks, accurate to about 50 millionths of an inch; slide them together and they adhere, using oils from hands as the liquid. Even chalk dust can interfere with this bond.
o Type II, Mechanical Interlocking
– Two rough surfaces with a liquid that hardens yields a mechanical bond
– Say 90+% of bonds are just mechanical interlocking
– Teflon mechanically interlocked into a porous surface
– Demonstration with “Magic Sand”
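The “formula on board” for Type I bonding is the Laplace pressure ΔP = γ(1/r1 + 1/r2). For a thin wetting film of thickness d between two flat blocks, the edge meniscus has one radius ≈ d/2 and the other ≈ infinity, giving a negative pressure of magnitude ≈ 2γ/d. Plugging in illustrative numbers (the oil surface tension and film thickness below are assumptions, not values from class):

```python
def capillary_pressure(gamma: float, d: float) -> float:
    # Thin wetting film between flat plates: meniscus radii ~ d/2 and ~infinity,
    # so |dP| = gamma / (d / 2) = 2 * gamma / d.
    return 2.0 * gamma / d

gamma_oil = 0.03   # N/m, typical light oil (assumed)
d_film = 1e-7      # m, a 0.1-micron film (assumed)
dP = capillary_pressure(gamma_oil, d_film)   # 6.0e5 Pa, several atmospheres
clamp_force = dP * 1e-4                      # N over a 1 cm^2 contact area
```

Roughly 60 N over a square centimeter with these assumed numbers, which is consistent with why wrung gauge blocks feel stuck together.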
Wetting
• Young’s equation
o Diagram on board: γ_SV = γ_SL + γ_LV·cos(θ)
o θ is the angle formed at the solid-liquid-vapor interface
o Used when we get to solders
o One of the simplest and most misunderstood equations
o An equation at equilibrium
o Water on a waxed car has high θ
o Wetting agents are added in automatic dishwashers to cause “sheeting action”
o Mercury (toxic); can use gallium (not as toxic as mercury, melts in your hand)
MIT 3.016 Fall 2005 © W.C. Carter Lecture 5
Sept. 16 2005: Lecture 5:
Introduction to Mathematica IV
Graphics
Graphics are an important part of exploring mathematics and conveying its results. An informative plot or graphic that conveys a complex idea succinctly and naturally to an educated observer is a work of creative art. Indeed, art is sometimes defined as “an elevated means of communication,” or “the means to inspire an observation, heretofore unnoticed, in another.” Graphics are art; they are necessary. And, I think they are fun.
For graphics, we are limited to two and three dimensions, but, with the added possibility of animation, sound, and perhaps other sensory input in advanced environments, it is possible to usefully visualize more than three dimensions. Mathematics is not limited to a small number of dimensions; so, a challenge (or perhaps an opportunity) exists to use artfulness to convey higher-dimensional ideas graphically.
Basic graphics starts with two-dimensional plots.
Mathematica® Example: Lecture05
Two-dimensional Plots
2D plots, plot options, log plots
Plotting Data
Sometimes you will want to plot numbers that come from elsewhere, otherwise known as data. Presumably, data will be imported with file I/O. It is useful to plot data within Mathematica® so you can compare it to model equations or fit it to an empirical equation.
Three-dimensional graphics are typically projected onto the screen. This means that you need to specify the direction in space from which you will look at the two-dimensional projection. You get some depth information in a projection from the perspective (i.e., the trick that artists use of making parallel lines converge at a non-infinite point; e.g., the 15th-century Italian School, Donatello). You also get information by changing your viewpoint. In Mathematica® you need to specify a ViewPoint that orients the viewer from a certain direction and sets the perspective. At a close viewpoint (i.e., the magnitude of the ViewPoint vector is small), parallel lines converge quickly and perspective, as well as distortion, is enhanced. For more distant ViewPoints, an object projects more “flatly” (as in Art Naif) and with less distortion.
Mathematica® Example: Lecture05
Three-Dimensional Graphics
Plotting three-dimensional graphics
Mathematica® has a “graphical engine” that allows you to add additional graphics to your plot. Although it is not efficient, one could use Mathematica® as a drawing program like Pourripinte or similar. Mathematica® has a number of graphics primitives that can be drawn; it is only a question of asking Mathematica® to draw a primitive where you want it.
Mathematica® Example: Lecture05
Graphics Primitives
Examples: Circles, Text, Random Walk, Wulff Construction
Because PostScript is one of the graphics primitives, you can draw anything that can be imaged in another application. You can also import your own drawings and images into Mathematica®.
Electricity and Magnetism
• More on
– Electric Flux
– Gauss’ Law
Feb 20 2002
More on Electric Flux and Gauss’ Law
Maxwell Equations (1873)
Electric Flux
Electric Flux Φ_E = E·A (note the absence of a vector arrow: Φ_E is a scalar)
Φ_E answers “how much?”, i.e., how much field passes through surface A?
The area vector A:
• Direction: normal to the surface
• Magnitude: the surface area
• For a closed surface: pointing outwards
Electric Flux
• What if E is not constant on surface A?
• Use an integral: Φ_E = ∫ E·dA
• Often over ‘closed’ surfaces
Gauss’ Law
• Connects the flux through a closed surface to the charge inside this surface:
∮ E·dA = Q_encl/ε₀ = 4πk·Q_encl
• Note: k = 1/(4πε₀)
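Gauss' law is easy to verify numerically: integrate E·n over a sphere and check that the answer is Q_encl/ε₀ no matter where the enclosed charge sits. A sketch using midpoint quadrature (the grid size and the off-center charge position are my choices):

```python
import math

def flux_through_sphere(q, charge_pos, R=1.0, n=200):
    """Numerically integrate the flux of a point charge's field through a
    sphere of radius R centered at the origin. Gauss' law predicts q/eps0
    for ANY position of the charge inside the sphere."""
    eps0 = 8.854187817e-12
    k = 1.0 / (4 * math.pi * eps0)
    xq, yq, zq = charge_pos
    dth = math.pi / n          # theta step
    dph = math.pi / n          # phi step = 2*pi / (2*n)
    total = 0.0
    for i in range(n):
        th = (i + 0.5) * dth
        sin_th = math.sin(th)
        z = R * math.cos(th)
        for j in range(2 * n):
            ph = (j + 0.5) * dph
            x = R * sin_th * math.cos(ph)
            y = R * sin_th * math.sin(ph)
            # E . n at the surface point, with n = (x, y, z) / R
            dx, dy, dz = x - xq, y - yq, z - zq
            r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
            E_dot_n = k * q * (dx * x + dy * y + dz * z) / (R * r3)
            total += E_dot_n * R * R * sin_th * dth * dph
    return total
```

Moving the charge around inside the sphere changes E·n pointwise but leaves the total flux fixed at q/ε₀, which is the content of the law.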
Gauss’ Law
• True for ANY closed surface around Q_encl
• A suitable choice of surface A can make the integral very simple
Use the Symmetry!
[Figure: point charge +Q with a concentric spherical Gaussian surface and area element dA.]
Use the Symmetry!
[Figure: uniformly charged sphere of radius r0 with concentric spherical Gaussian surfaces A1 and A2.]
Use the Symmetry!
[Figure: charged line with a coaxial cylindrical Gaussian surface.]
Hollow Conducting Sphere
[Figure: hollow conducting sphere with charge distributed over its surface.]
Last Example
[Figure: parallel rows of uniform positive charge (charged sheets).]
Faraday Cage
Hollow Metal Sphere
[Figure: hollow metal sphere charged by a high-voltage (HV) source; the charge resides on the outside. Van de Graaff generator. Figure by MIT OCW.]
[Figure: near the surface of the charged sphere the field is large, E ~ 1/r².]
‘Challenge’ In-Class Demo
Lecture 2
8.251 Spring 2007
Lecture 2 - Topics
• Energy and momentum
• Compact dimensions, orbifolds
• Quantum mechanics and the square well
Reading: Zwiebach, Sections: 2.4 - 2.9
x^± = (1/√2)(x^0 ± x^1)
x^+ is the light-cone (l.c.) time.
Leave x^2 and x^3 untouched.
−ds^2 = −(dx^0)^2 + (dx^1)^2 + (dx^2)^2 + (dx^3)^2 = ημν dx^μ dx^ν,  μ, ν = 0, 1, 2, 3
2 dx^+ dx^− = (dx^0 + dx^1)(dx^0 − dx^1) = (dx^0)^2 − (dx^1)^2
−ds^2 = −2 dx^+ dx^− + (dx^2)^2 + (dx^3)^2 = η̂μν dx^μ dx^ν,  μ, ν = +, −, 2, 3
η̂μν =
⎡  0 −1  0  0 ⎤
⎢ −1  0  0  0 ⎥
⎢  0  0  1  0 ⎥
⎣  0  0  0  1 ⎦
η̂++ = η̂−− = 0,  η̂+I = η̂−I = 0  (I = 2, 3)
η̂+− = η̂−+ = −1
η̂22 = η̂33 = 1
Given a vector a^μ, transform to:
a^± := (1/√2)(a^0 ± a^1)
Einstein’s equations in 3 space-time dimensions are great. But 2-dimensional
space is not enough for life. Luckily, it works also in higher dimensions (d = 5, 6, ...).
Why don’t we live with 4 space dimensions?
If we lived with 4 space dimensions, planetary orbits wouldn’t be stable (which
would be a problem!)
Maybe there’s an extra dimension where we can unify gravity and ...
If so, then the extra dimensions would have to be very small – too small
to see.
String theory has extra dimensions and makes the theory work. Though caution:
this is a pretty big leap.
Trees in a Box
Look at trees in a box
Move a little and see another behind it
In fact, you see an infinite row of trees that are all identical! Leaves fall identically and everything.
Dot Product
a · b = −a^0 b^0 + Σ_{i=1}^{3} a^i b^i
     = −a^+ b^− − a^− b^+ + a^2 b^2 + a^3 b^3
     = η̂μν a^μ b^ν
a_μ = η̂μν a^ν
a_+ = η̂+ν a^ν = η̂+− a^− = −a^−
a_+ = −a^−,  a_− = −a^+
Light rays are a bit like in Galilean physics – the light-cone velocity goes from 0 to ∞:
v_lc = dx^− / dx^+
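A quick numerical check of the light-cone dot product above (a sketch I have added; the function names are mine, not from the notes):

```python
import math

def lc(a):
    """Map (a0, a1, a2, a3) to light-cone components (a+, a-, a2, a3)."""
    return ((a[0] + a[1]) / math.sqrt(2), (a[0] - a[1]) / math.sqrt(2), a[2], a[3])

def dot_standard(a, b):
    # a.b = -a0 b0 + sum_i ai bi
    return -a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3]

def dot_lightcone(a, b):
    # a.b = -a+ b- - a- b+ + a2 b2 + a3 b3
    ap, am, a2, a3 = lc(a)
    bp, bm, b2, b3 = lc(b)
    return -ap * bm - am * bp + a2 * b2 + a3 * b3

a, b = [1.0, 2.0, 3.0, 4.0], [0.5, -1.0, 2.0, 0.0]
assert abs(dot_standard(a, b) - dot_lightcone(a, b)) < 1e-12
```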
Energy and Momentum
Event 1 at x^μ
Event 2 at x^μ + dx^μ (after some positive time change)
dx^μ is a Lorentz vector
The dimension along the row of trees is actually a circle with one tree, so not
actually infinite.
You see light rays that go around the circle multiple times, so you see multiple trees.
This is a crazy way to define a circle.
This circle is a topological circle – no “center”, no “radius”.
Identify two points, P1 and P2. Say they are the same (P1 ≈ P2) if and only if
x(P1) = x(P2) + (2πR)n (n ∈ Z)
Write as:
x ≈ x + (2πR)n
Define: Fundamental Domain = a region such that:
1. No two points in it are identified.
2. Every point in the full space is either in the fundamental domain or has a
representative in the fundamental domain.
So on our x line, the fundamental domain can be taken to be the interval [0, 2πR).
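The identification x ≈ x + (2πR)n and its fundamental domain can be sketched in code (helper names are mine, added for illustration):

```python
import math

def representative(x, R):
    """Canonical representative of x in the fundamental domain [0, 2*pi*R)
    under the identification x ~ x + 2*pi*R*n, n an integer."""
    return x % (2 * math.pi * R)

def identified(x1, x2, R, tol=1e-9):
    """True iff x1 ~ x2, i.e. they differ by an integer multiple of 2*pi*R."""
    L = 2 * math.pi * R
    d = (x1 - x2) % L
    return d < tol or L - d < tol

# Going around the circle three times lands on the same point:
assert identified(0.5, 0.5 + 3 * 2 * math.pi, 1.0)
```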
−ds^2 = −c^2 dt^2 + (dx⃗)^2
      = −c^2 dt^2 + v^2 dt^2
      = −c^2 (1 − β^2) dt^2
ds^2 is a positive value so we can take the square root:
ds = c √(1 − β^2) dt
In the co-moving Lorentz frame, do the same computation and find:
−ds^2 = −c^2 (dt_p)^2 + (dx⃗)^2 = −c^2 (dt_p)^2
dt_p: proper time moving with the particle. Also greater than 0.
So ds = c dt_p.
Define the velocity four-vector (dx^μ/ds is a Lorentz vector):
u^μ = c dx^μ/ds = dx^μ/dt_p
Define the momentum four-vector:
p^μ = m u^μ = m dx^μ / (√(1 − β^2) dt) = mγ dx^μ/dt
γ = 1/√(1 − β^2)
Rule to get the space we’re trying to construct:
Take the fundamental domain, include its boundary, and apply the identification.
Note: Easy to get mixed up if the rule is not followed carefully.
Consider ℝ² with 2 identifications:
(x, y) ≈ (x + L1, y)
(x, y) ≈ (x, y + L2)
Blue: Fundamental domain for the first identification
Red: Fundamental domain for the second identification
p^μ = mγ (dx^0/dt, dx⃗/dt) = (mcγ, mγ v⃗) = (E/c, p⃗)
E: relativistic energy = mc²/√(1 − β^2)
p⃗: relativistic momentum
Scalar:
p · p = p_μ p^μ = −(p^0)^2 + (p⃗)^2
     = −E^2/c^2 + p⃗^2
     = −m^2 c^2/(1 − β^2) + m^2 v^2/(1 − β^2)
     = −m^2 c^2 (1 − β^2)/(1 − β^2)
     = −m^2 c^2
Every observer agrees on this value.
Light-Cone Energy
x^0 = time,  E/c = p^0
x^+ = l.c. time,  E_lc/c = p^+? → Nope!
Justify using QM: Ψ(t, x⃗) = e^{−(i/ℏ)(Et − p⃗·x⃗)}
Can think of the IDs as transformations – points “move.” Here’s something that
“moves” some points but not all.
Orbifolds
1.
ID: x ≈ −x
FD: the half-line x ≥ 0
Think of the ID as the transformation x → −x.
This FD is not a normal 1D manifold since the origin is a fixed point. Call this
half-line ℝ/ℤ2 the quotient.
2.
ID: x ≈ x rotated about the origin by 2π/n
In polar coordinates:
z = x + iy
z ≈ e^{2πi/n} z
The fundamental domain can be chosen to be a wedge of angle 2π/n.
Cone!
We focus on these two since they are quite solvable in string theory.
SE (Schrödinger equation):
p̂ = (ℏ/i) ∇
iℏ ∂Ψ/∂x^0 = (E/c) Ψ,  i.e.  iℏ ∂Ψ/∂t = EΨ
So for our x^+, we want  iℏ ∂Ψ/∂x^+ = (E_lc/c) Ψ
Et − p⃗ · x⃗ = −(−(E/c)(ct) + p⃗ · x⃗) = −p · x = −(p_+ x^+ + p_− x^− + …)
Now we have isolated the dependence on x^+, so we can take the derivative.
So:
Ψ = e^{(i/ℏ)(p_+ x^+ + …)}
iℏ ∂Ψ/∂x^+ = −p_+ Ψ
⇒ E_lc/c = −p_+ = p^−
Suppose we have a line segment of length a, with a particle constrained to it:
Compare to the physics of a world with the particle constrained to a thin cylinder
of radius R and length a (2D).
It can be defined as a strip of width a, with the ID (x, y) ≈ (x, y + 2πR).
So:
1. Segment:
SE: −(ℏ²/2m) d²Ψ/dx² = EΨ
Ψ_k ∝ sin(kπx/a)
E_k = (ℏ²/2m)(kπ/a)²
2. Cylinder:
SE: −(ℏ²/2m)(∂²/∂x² + ∂²/∂y²)Ψ = EΨ
Ψ_{k,l} ∝ sin(kπx/a) cos(ly/R)  or  sin(kπx/a) sin(ly/R)
E_{k,l} = (ℏ²/2m)[(kπ/a)² + (l/R)²]
States with l = 0 give the same states as case 1, but states with l ≠ 0 get a
different E value from the (l/R)² contribution. Only noticeable at very high temperatures.
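A quick numerical comparison of the two spectra (a sketch I have added; units with ℏ²/2m = 1, and the function names are mine):

```python
import math

def E_segment(k, a):
    """Energy level k of a particle on a segment of length a (units: hbar^2/2m = 1)."""
    return (k * math.pi / a) ** 2

def E_cylinder(k, l, a, R):
    """Energy on the thin cylinder: segment mode k plus circle mode l."""
    return (k * math.pi / a) ** 2 + (l / R) ** 2

a, R = 1.0, 1e-3                                   # a small compact dimension
assert E_cylinder(1, 0, a, R) == E_segment(1, a)   # l = 0: same spectrum as 1D
gap = E_cylinder(1, 1, a, R) - E_segment(1, a)     # first l != 0 excitation
assert gap == (1 / R) ** 2                         # huge for small R
```

For small R the first l ≠ 0 level sits at energy ~1/R² above the 1D spectrum, which is why a tiny compact dimension is invisible at low energies.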
6.852: Distributed Algorithms
Fall, 2009
Class 6
Today’s plan
• f+1-round lower bound for stopping agreement, cont’d.
• Various other kinds of consensus problems in synchronous
networks:
– k-agreement
– Approximate agreement (skip)
– Distributed commit
• Reading:
– [Aguilera, Toueg]
– [Keidar, Rajsbaum]
– Chapter 7 (skip 7.2)
• Next:
– Modeling asynchronous systems
– Chapter 8
Lower Bound on Rounds
• Theorem 1: Suppose n ≥ f + 2. There is no n-process f-
fault stopping agreement algorithm in which nonfaulty
processes always decide at the end of round f.
• Old proof: Suppose A exists.
– Construct a chain of executions, each with at most f failures, where:
• First has decision value 0, last has decision value 1.
• Any two consecutive executions are indistinguishable to some process i
that is nonfaulty in both.
– So decisions in first and last executions are the same, contradiction.
– Must fail f processes in some executions in the chain, in order to
remove all the required messages, at all rounds.
– Construction in book, LTTR.
• Newer proof [Aguilera, Toueg]:
– Uses ideas from [Fischer, Lynch, Paterson], impossibility of
asynchronous consensus.
[Aguilera, Toueg] proof
• By contradiction. Assume A solves stopping agreement
for f failures and everyone decides after exactly f rounds.
• Consider only executions in which at most one process
fails during each round.
• Recall failure at a round allows process to miss sending
any subset of the messages, or to send all but halt
before changing state.
• Regard vector of initial values as a 0-round execution.
• Defs (adapted from [FLP]): α, an execution that
completes some finite number (possibly 0) of rounds, is:
– 0-valent, if 0 is the only decision that can occur in any execution
(of the kind we consider) that extends α.
– 1-valent, if 1 is…
– Univalent, if α is either 0-valent or 1-valent (essentially decided).
– Bivalent, if both decisions occur in some extensions (undecided).
Univalence and Bivalence
[Diagram: trees of extensions of α, illustrating a 0-valent α, a 1-valent α, and a bivalent α]
Initial bivalence
• Lemma 1: There is some 0-round execution
(vector of initial values) that is bivalent.
• Proof (from [FLP]):
– Assume for contradiction that all 0-round executions
are univalent.
– 000…0 is 0-valent.
– 111…1 is 1-valent.
– So there must be two 0-round executions that differ in
the value of just one process, i, such that one is 0-
valent and the other is 1-valent.
– But this is impossible, because if i fails at the start, no
one else can distinguish the two 0-round executions.
Bivalence through f-1 rounds
• Lemma 2: For every k, 0 ≤ k ≤ f-1, there is a bivalent k-
round execution.
• Proof: By induction on k.
– Base: Lemma 1.
– Inductive step: Assume for k, show for k+1, where k < f -1.
• Assume bivalent k-round execution α.
• Assume for contradiction that every 1-round
extension of α (with at most one new failure)
is univalent.
• Let α* be the 1-round extension of α in
which no new failures occur in round k+1.
• By assumption, α* is univalent, WLOG 1-
valent.
[Diagram: α with round-(k+1) extensions α* and α0]
• Since α is bivalent, there must be another 1-
round extension of α, α0, that is 0-valent.
Bivalence through f-1 rounds
•
In α0, some single process, say i, fails in
round k+1, by not sending to some set of
processes, say J = {j1, j2,…jm}.
• Define a chain of (k+1)-round executions,
α0, α1, α2,…, αm.
• Each αl in this sequence is the same as α0
except that i also sends messages to j1,
j2,…jl.
– Adding in messages from i, one at a time.
• Each αl is univalent, by assumption.
• Since α0 is 0-valent, either:
– At least one of these is 1-valent, or
– All are 0-valent.
Case 1: At least one αl is 1-valent
• Then there must be some l such that αl-1 is 0-
valent and αl is 1-valent.
• But αl-1 and αl differ after round k+1 only in the
state of one process, jl.
• We can extend both αl-1 and αl by simply failing jl
at beginning of round k+2.
– There is actually a round k+2 because we’ve
assumed k < f-1, so k+2 ≤ f.
• And no one left alive can tell the difference!
• Contradiction for Case 1.
Case 2: Every αl is 0-valent
• Then compare:
– αm, in which i sends all its round k+1 messages and
then fails, with
– α* , in which i sends all its round k+1 messages and
does not fail.
• No other differences, since only i fails at round k+1
in αm.
• αm is 0-valent and α* is 1-valent.
• Extend to full f-round executions:
– αm, by allowing no further failures,
– α*, by failing i right after round k+1 and then allowing no
further failures.
• No one can tell the difference.
• Contradiction for Case 2.
Bivalence through f-1 rounds
• So we’ve proved, so far:
• Lemma 2: For every k, 0 ≤ k ≤ f-1, there is
a bivalent k-round execution.
Disagreement after f rounds
• Lemma 3: There is an f-round execution in which two
nonfaulty processes decide differently.
• Proof:
– Use Lemma 2 to get a bivalent (f-1)-round execution α
with ≤ f-1 failures.
– In every 1-round extension of α, everyone who hasn’t
failed must decide (and agree).
– Let α* be the 1-round extension of α in which no new
failures occur in round f.
– Everyone who is still alive decides after α*, and they
must decide the same thing. WLOG, say they decide 1.
– Since α is bivalent, there must be another 1-round
extension of α, say α0, in which some nonfaulty process
(and so, all nonfaulty processes) decide 0.
Disagreement after f rounds
• In α0, some single process i fails in round f.
• Let j, k be two nonfaulty processes.
• Define a chain of three f-round executions, α0, α1, α*,
where α1 is identical to α0 except that i sends to j in α1
(it might not in α0).
• Then α1 ~k α0.
• Since k decides 0 in α0, k also decides 0 in α1.
• Also, α1 ~j α*.
• Since j decides 1 in α*, j also decides 1 in α1.
• Yields disagreement in α1, contradiction!
• So we’ve proved:
• Lemma 3: There is an f-round execution in which two nonfaulty
processes decide differently.
• Which immediately yields the lower bound result.
Early-stopping agreement algorithms
• Tolerate f failures in general, but in executions with f′ < f failures, terminate faster.
• [Dolev, Reischuk, Strong 90]: Stopping agreement algorithm in which all nonfaulty
processes terminate in ≤ min(f′ + 2, f+1) rounds.
– If f′ + 2 ≤ f, decide “early”, within f′ + 2 rounds; in any case decide
within f+1 rounds.
• [Keidar, Rajsbaum 02]: Lower bound of f′ + 2 for early-stopping agreement.
– Not just f′ + 1. Early stopping requires an extra round.
• Theorem 2: Assume 0 ≤ f′ ≤ f – 2 and f < n. Every early-
stopping agreement algorithm tolerating f failures has an
execution with f′ failures in which some nonfaulty process
doesn’t decide by the end of round f′ + 1.
Just consider special case: f′ = 0
• Theorem 3: Assume 2 ≤ f < n. Every early-stopping
agreement algorithm tolerating f failures has a failure-free
execution in which some nonfaulty process does not decide
by the end of round 1.
• Definition: Let α be an execution that completes some
finite number (possibly 0) of rounds. Then val(α) is the
unique decision value in the extension of α with no new
failures.
• Proof of Theorem 3:
– Assume executions in which at most one process fails per round.
– Identify 0-round executions with vectors of initial values.
– Assume, for contradiction, that everyone decides by round 1, in all
failure-free executions.
– val(000…0) = 0, val(111…1) = 1.
– So there must be two 0-round executions α0 and α1, that differ in the
value of just one process i, such that val(α0) = 0 and val(α1) = 1.
Special case: f′ = 0
• 0-round executions α0 and α1, differing only in the initial value of
process i, such that val(α0) = 0 and val(α1) = 1.
• In failure-free extensions of α0, α1, all processes decide in one round.
• Define:
– β0, 1-round extension of α0, in which process i fails, sends only to j.
– β1, 1-round extension of α1, in which process i fails, sends only to j.
• Then:
– β0 looks to j like ff extension of α0, so j decides 0 in β0 after 1 round.
– β1 looks to j like ff extension of α1, so j decides 1 in β1 after 1 round.
• β0 and β1 are indistinguishable to all processes except i, j.
• Define:
– γ 0, infinite extension of β0, in which process j fails right after round 1.
– γ 1, infinite extension of β1, in which process j fails right after round 1.
• By agreement, all nonfaulty processes must decide 0 in γ 0, 1 in γ 1.
• But γ 0 and γ 1 are indistinguishable to all nonfaulty processes, so they
can’t decide differently, contradiction.
k-Agreement
k-agreement
• Usually called k-set agreement or k-set
consensus.
• Generalizes ordinary stopping agreement by
allowing k different decisions instead of just one.
• Motivation:
– Practical:
• Allocating shared resources, e.g., agreeing on small number
of radio frequencies to use for sending/receiving broadcasts.
– Mathematical:
• Natural generalization of ordinary 1-agreement.
• Elegant theory: Nice topological structure, tight bounds.
The k-agreement problem
• Assume:
– n-node complete undirected graph
– Stopping failures only
– Inputs, decisions in finite totally-ordered set V (appear
in state variables).
• Correctness conditions:
– Agreement:
• ∃ W ⊆ V, |W| = k, all decision values in W.
• That is, there are at most k different decision values.
– Validity:
• Any decision value is some process’ initial value.
• Like strong validity for 1-agreement.
– Termination:
• All nonfaulty processes eventually decide.
FloodMin k-agreement algorithm
• Algorithm:
– Each process remembers the min value it has seen,
initially its own value.
– At each round, broadcasts its min value.
– Decide after some generally-agreed-upon number of
rounds, on current min value.
• Q: How many rounds are enough?
• 1-agreement: f+1 rounds
– Argument like those for previous stopping agreement
algorithms.
• k-agreement: ⎣f/k⎦ + 1 rounds.
• Allowing k values divides the runtime by k.
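The FloodMin rule can be sketched as a round-by-round simulation (my own sketch; the function name and the failure-schedule format are assumptions, not from the slides — a process crashing in a round delivers its message only to the listed recipients):

```python
def floodmin(values, fail_schedule, k, f):
    """FloodMin sketch: each process keeps the min value it has seen and
    broadcasts it each round, for floor(f/k)+1 rounds.
    fail_schedule[r] maps each process that crashes in round r to the set
    of processes its last messages still reach."""
    mins = list(values)
    alive = set(range(len(values)))
    for r in range(f // k + 1):
        crashing = fail_schedule.get(r, {})
        new_mins = list(mins)
        for i in alive:
            inbox = [mins[j] for j in alive
                     if j not in crashing or i in crashing[j]]
            new_mins[i] = min([mins[i]] + inbox)
        alive -= set(crashing)
        mins = new_mins
    return {mins[i] for i in alive}   # the set of decision values

# Adversary: the holder of the min crashes while telling only one process,
# which then crashes silently; at most k values survive.
sched = {0: {4: {3}}, 1: {3: set()}}
assert len(floodmin([5, 4, 3, 2, 1], sched, k=2, f=2)) <= 2
assert len(floodmin([5, 4, 3, 2, 1], sched, k=1, f=2)) == 1
```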
FloodMin correctness
• Theorem 1: FloodMin, for ⎣f/k⎦ + 1 rounds, solves k-
agreement.
• Proof:
• Define M(r) = set of min values of active (not-yet-failed)
processes after r rounds.
• This set can only decrease over time:
• Lemma 1: M(r+1) ⊆ M(r) for every r, 0 ≤ r ≤ ⎣f/k⎦.
• Proof: Any min value after r+1 is someone’s min value
after r.
Proof of Theorem 1, cont’d
• Lemma 2: If at most d-1 processes fail during round r,
then |M(r)| ≤ d.
• E.g., for d = 1: If no one fails during round r then all have
the same min value after r.
• Proof: Show contrapositive.
– Suppose that |M(r)| > d, show at least d processes fail in round r.
– Let m = max (M(r)).
– Let m′ < m be any other element of M(r).
– Then m′ ∈ M(r-1) by Lemma 1.
– Let i be a process active after r-1 rounds that has m′ as its min
value after r-1 rounds.
– Claim i fails in round r:
• If not, everyone would receive m′ in round r.
• That means no one would choose m > m′ as its min, contradiction.
– But this is true for every m′ < m in M(r), so at least d processes
fail in round r.
Proof of Theorem 1, cont’d
• Validity: Easy
• Termination: Obvious
• Agreement: By contradiction.
– Assume an execution with > k different decision values.
– Then the number of min values for active processes after the full
⎣f/k⎦ + 1 rounds is > k.
– That is, |M(⎣f/k⎦ + 1)| > k.
– Then by Lemma 1, |M(r)| > k for every r, 0 ≤ r ≤ ⎣f/k⎦+1.
– So by Lemma 2, at least k processes fail in each round.
– That’s at least (⎣f/k⎦+1) k total failures, which is > f failures.
– Contradiction!
Lower Bound (sketch)
• Theorem 2: Any algorithm for k-agreement requires ≥ ⎣f/k⎦ + 1 rounds.
• Recall old proof for f+1-round lower bound for 1-agreement.
– Chain of executions for assumed algorithm:
α0 ----- α1 ----- …-----αj -----αj+1 ----- …-----αm
– Each execution has a unique decision value.
– Executions at ends of chain have specified decision values.
– Two consecutive executions look the same to some nonfaulty process,
who (therefore) decides the same in both.
• Argument doesn’t extend immediately to k-agreement:
– Can’t assume a unique value in each execution.
– Example: For 2-agreement, could have 3 different values in 2
consecutive executions without violating agreement.
• Instead, use a k-dimensional generalized chain.
Lower bound
• Assume, for contradiction:
– n-process k-agreement algorithm tolerating f failures.
– All processes decide just after round r, where r ≤ ⎣f/k⎦.
– All-to-all communication at all rounds.
– n ≥ f + k + 1 (so each execution we consider has at least k+1
nonfaulty processes)
– V = {0,1,…,k}, k+1 values.
• Get contradiction by proving
existence of an execution with ≥ k + 1
different decision values.
• Use k-dimensional collection of
executions rather than 1-dimensional.
– k = 2: Triangle
– k = 3: Tetrahedron, etc.
Labeling nodes with executions
[Triangle diagram of initial values — corners: All 0s, All 1s, All 2s; edges: 0s and 1s, 0s and 2s, 1s and 2s]
• Bermuda Triangle (k = 2): Any
algorithm vanishes somewhere in
the interior.
• Label nodes with executions:
– Corner: No failures, all have same
initial value.
– Boundary edge: Initial values
chosen from those of the two
endpoints
– For k > 2, generalize to boundary
faces.
– Interior: Mixture of inputs
• Label so executions “morph
gradually” in all directions:
• Difference between two adjacent
executions along an outer edge:
– Remove or add one message, to a
process that fails immediately.
– Fail or recover a process.
– Change initial value of failed
process.
Labeling nodes with
process names
• Also label each node with the name of a process that is nonfaulty in
the node’s execution.
• Consistency: For every tiny triangle (simplex) T, there is a single
execution β, with at most f faults, that is “compatible” with the
executions and processes labeling the corners of T:
– All the corner processes are nonfaulty in β.
– If (α′,i) labels some corner of T, then α′ is indistinguishable by i from β.
• Formalizes the “gradual morphing” property.
• Proof by laborious construction.
• Can recast chain arguments for 1-agreement in this style:
β
α0 ----- α1 ----- … ----- αj ----- αj+1 ----- … ----- αm
p0   p1   …   pj   pj+1   …   pm
– β indistinguishable by pj from αj
– β indistinguishable by pj+1 from αj+1
Bound on rounds
• This labeling construction uses the assumption r
≤ ⎣f / k⎦, that is, f ≥ r k.
• How:
– We are essentially constructing chains simultaneously
in k directions (2 directions, in the Bermuda Triangle).
– We need r failures (one per round) to construct the
“chain” in each direction.
– For k directions, that’s r k total failures.
• Details LTTR (see book, or paper [Chaudhuri,
Herlihy, Lynch, Tuttle])
Coloring the nodes
• Now color each node v with a
“color” in {0,1,…,k}:
– If v is labeled with (α,i) then
color(v) = i’s decision value in α.
• Properties:
– Colors of the major corners are
all different.
– Color of each boundary edge
node is the same as one of the
endpoint corners.
– For k > 2, generalize to
boundary faces.
• Coloring properties follow from
Validity, because of the way the
initial values are assigned.
Sperner Colorings
• A coloring with the listed
properties (suitably
generalized to k dimensions)
is called a “Sperner Coloring”
(in algebraic topology).
• Sperner’s Lemma: Any
Sperner Coloring has some
tiny triangle (simplex) whose
k+1 corners are colored by
all k+1 colors.
• Find one?
Applying Sperner’s Lemma
• Apply Sperner’s Lemma to the coloring we constructed.
• Yields a tiny triangle (simplex) T with k+1 different colors on its
corners.
• Which means k+1 different decision values for the executions and
processes labeling its corners.
• But consistency for T yields a single execution β, with at most f
faults, that is “compatible” with the executions and processes
labeling the corners of T:
– All the corner processes are nonfaulty in β.
– If (α′,i) labels some corner of T, then α′ is indistinguishable by i from β.
• So all the corner processes behave the same in β as they do in their
own corner executions, and decide on the same values as in those
executions.
• That’s k+1 different decision values in one execution with at most f
faults.
• Contradicts k-agreement.
Approximate Agreement
Approximate Agreement problem
• Agreement on real number values:
– Readings of several altimeters on an aircraft.
– Values of approximately-synchronized clocks.
• Consider with Byzantine participants, e.g., faulty hardware.
• Abstract problem:
– Inputs, outputs are reals
– Agreement: Within ε.
– Validity: Within range of initial values of nonfaulty processes.
– Termination: Nonfaulty eventually decide.
• Assumptions: Complete n-node graph, n > 3f.
• Could solve by exact BA, using f+1 rounds and lots of
communication.
• But better algorithms exist:
– Simpler, cheaper
– Extend to asynchronous settings, whereas BA is unsolvable in
asynchronous networks.
Approximate agreement algorithm
[Dolev, Lynch, Pinter, Stark, Weihl]
• Use convergence strategy, successively narrowing the
interval of guesses of the nonfaulty processes.
– Take an average at each round.
– Because of Byzantine failures, need fault-tolerant average.
• Maintain val, latest estimate, initially initial value.
• At every round:
– Broadcast val, collect received values into multiset W.
– Fill in missing entries with any values.
– Calculate W′ = reduce(W), by discarding f largest and f smallest
elements.
– Calculate W″ = select(W′), by choosing the smallest value in W′
and every f’th value thereafter.
– Reset val to mean(W″).
Example: n = 4, f = 1
• Initial values: 1, 2, 3, 4
• Process 3 faulty, sends:
proc 1: 2,  proc 2: 100,  proc 4: −100
• Process 1:
– Receives (1, 2, 2, 4), reduces to (2, 2), selects (2, 2), mean = 2.
• Process 2:
– Receives (1, 2, 100, 4), reduces to (2, 4), selects (2, 4), mean = 3.
• Process 4:
– Receives (1, 2, -100, 4), reduces to (1, 2), selects (1, 2), mean =
1.5.
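The reduce/select/mean round just illustrated can be sketched directly from the rule as stated on the slide (the function name is mine):

```python
def ft_average(W, f):
    """One round of the fault-tolerant average (sketch):
    reduce (drop the f smallest and f largest values), select (the
    smallest value and every f-th value thereafter), then take the mean."""
    W = sorted(W)
    reduced = W[f:len(W) - f]   # discard f largest and f smallest
    selected = reduced[::f]     # smallest, then every f-th value after it
    return sum(selected) / len(selected)

# The n = 4, f = 1 example from the slides:
assert ft_average([1, 2, 2, 4], 1) == 2.0       # process 1
assert ft_average([1, 2, 100, 4], 1) == 3.0     # process 2
assert ft_average([1, 2, -100, 4], 1) == 1.5    # process 4
```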
One-round guarantees
• Lemma 1: Any nonfaulty process’ val after the round is in the range
of nonfaulty processes’ vals before the round.
• Proof: All elements of reduce(W) are in this range, because there
are at most f faults, and we discard the top and bottom f values.
• Lemma 2: Let d be the range of nonfaulty processes’ vals just
before the round. Then the range of nonfaulty processes’ vals after
the round is at most d / (⎣(n – (2f+1)) / f⎦ + 1).
• That is:
– If n = 3f + 1, then the new range is d / 2.
– If n = kf + 1, k ≥ 3, then the new range is d / (k -1).
• Proof: Calculations, in book.
• Example: n = 4, f = 1
– Initial vals: 1, 2, 3, 4, range is 3.
– Process 3 faulty, sends 2 to proc 1, 100 to proc 2, -100 to proc 4.
– New vals of nonfaulty processes: 2, 3, 1.5
– New range is 1.5.
The complete algorithm
• Just run the 1-round algorithm repeatedly.
• Termination: Add a mechanism, e.g.:
– Each node individually determines a round by which it knows
that the vals of nonfaulty processes are all within ε.
• Collect first round vals, predict using known convergence rate.
– After the determined round, decide locally.
– Thereafter, send the decision value.
• Upsets the convergence calculation.
• But that doesn’t matter because the vals are already within ε.
• Remarks:
– Convergence rate can be improved somewhat by using 2-round
blocks [Fekete].
– Algorithm extends easily to asynchronous case, using an
“asynchronous round” structure we’ll see later.
Distributed Commit
Distributed Commit
• Motivation: Distributed database transaction processing
– A database transaction performs work at several distributed sites.
– Transaction manager (TM) at each site decides whether it would
like to “commit” or “abort” the transaction.
• Based on whether the transaction’s work has been successfully
completed at that site, and results made stable.
– All TMs must agree on whether to commit or abort.
• Assume:
– Process stopping failures only.
– n-node, complete, undirected graph.
• Require:
– Agreement: No two processes decide differently (faulty or not,
uniformity)
– Validity:
• If any process starts with 0 (abort) then 0 is the only allowed decision.
• If all start with 1 (commit) and there are no faulty processes then 1 is
the only allowed decision.
Correctness Conditions for Commit
• Agreement: No two processes decide differently.
• Validity:
– If any process starts with 0 then 0 is the only allowed decision.
– If all start with 1 and there are no faulty processes then 1 is the
only allowed decision.
– Note the asymmetry: Guarantee abort (0) if anyone wants to
abort; guarantee commit (1) if everyone wants to commit and no
one fails (best case).
• Termination:
– Weak termination: If there are no failures then all processes
eventually decide.
– Strong termination (non-blocking condition): All nonfaulty
processes eventually decide.
2-Phase Commit
• Traditional, blocking algorithm
(guarantees weak termination only).
• Assumes distinguished process 1,
acts as “coordinator” (leader).
• Round 1: All send initial values to
process 1, who determines the
decision.
[Diagram: round-1 messages from p2, p3, p4 to coordinator p1; round-2 messages from p1 back out]
• Round 2: Process 1 sends out the
decision.
• Q: When can each process actually decide?
• Anyone with initial value 0 can decide at the beginning.
• Process 1 decides after receiving round 1 messages:
– If it sees 0, or doesn’t hear from someone, it decides 0; otherwise
decides 1.
• Everyone else decides after round 2.
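The coordinator's round-1 decision rule can be sketched as a toy model (my own sketch; the function name is an assumption, and the round-2 broadcast is not modeled):

```python
def two_phase_commit_decision(inputs, heard_from):
    """Round-1 decision of coordinator p1 in 2-phase commit (toy model).
    inputs: dict process id -> initial value in {0, 1} (1 = commit).
    heard_from: processes whose round-1 message reached p1
    (p1 always knows its own value)."""
    votes = [inputs[i] for i in heard_from] + [inputs[1]]
    missing = set(inputs) - set(heard_from) - {1}
    if missing or 0 in votes:
        return 0   # saw an abort vote, or didn't hear from someone: abort
    return 1       # everyone voted commit and no one was silent: commit

ins = {1: 1, 2: 1, 3: 1, 4: 1}
assert two_phase_commit_decision(ins, {2, 3, 4}) == 1
assert two_phase_commit_decision({1: 1, 2: 0, 3: 1, 4: 1}, {2, 3, 4}) == 0
assert two_phase_commit_decision(ins, {2, 3}) == 0   # a silent process forces abort
```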
Correctness of 2-Phase Commit
• Agreement:
– Because decision is centralized (and
consistent with any individual initial
decisions).
• Validity:
– Because of how the coordinator decides.
• Weak termination:
– If no one fails, everyone terminates by end of
round 2.
• Strong termination?
– No: If coordinator fails before sending its
round 2 messages, then others with initial
value 1 will never terminate.
Add a termination protocol?
• We might try to add a termination
protocol: other processes try to detect
failure of coordinator and finish
agreeing on their own.
• But this can’t always work:
– If initial values are 0,1,1,1, then by validity,
others must decide 0.
– If initial values are 1,1,1,1 and process 1
fails just after deciding, and before sending
out its round 2 messages, then:
• By validity, process 1 must decide 1.
• By agreement, others must decide 1.
– But the other processes can’t distinguish
these two situations.
[Diagram: the two initial-value situations, 0,1,1,1 and 1,1,1,1]
Complexity of 2-phase commit
• Time:
– 2 rounds
• Communication:
– At most 2n messages
3-Phase Commit [Skeen]
• Yields strong termination.
• Trick: Introduce intermediate stage, before actually
deciding.
• Process states classified into 4 categories:
– dec-0: Already decided 0.
– dec-1: Already decided 1.
– ready: Ready to decide 1 but hasn’t yet.
– uncertain: Otherwise.
• Again, process 1 acts as “coordinator”.
• Communication pattern: coordinator p1 exchanges messages with all other processes.
3-Phase Commit
• All processes initially uncertain.
• Round 1:
– All other processes send their initial values to p1.
– All with initial value 0 decide 0 (and enter dec-0 state)
– If p1 receives 1s from everyone and its own initial value is 1, p1
becomes ready, but doesn’t yet decide.
– If p1 sees 0 or doesn’t hear from someone, p1 decides 0.
• Round 2:
– If p1 has decided 0, broadcasts “decide 0”, else broadcasts “ready”.
– Anyone else who receives “decide 0” decides 0.
– Anyone else who receives “ready” becomes ready.
– Now p1 decides 1 if it hasn’t already decided.
• Round 3:
– If p1 has decided 1, bcasts “decide 1”.
– Anyone else who receives “decide 1”
decides 1.
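p1's end-of-round-1 rule can be sketched as follows (a toy sketch; the function name, dict encoding, and None-for-silence convention are assumptions, and state names follow the four categories above):

```python
# Toy sketch of the 3PC coordinator's rule at the end of round 1.
def p1_after_round1(own_value, received):
    # received: dict of other processes' initial values; None = not heard from.
    if own_value == 1 and all(v == 1 for v in received.values()):
        return "ready"   # ready to decide 1, but hasn't decided yet
    return "dec-0"       # saw a 0 or silence: decide 0

print(p1_after_round1(1, {2: 1, 3: 1, 4: 1}))     # -> ready
print(p1_after_round1(1, {2: 1, 3: None, 4: 1}))  # -> dec-0 (silence counts as 0)
```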
3-Phase Commit
• Key invariants (after 0, 1, 2, or 3 rounds):
– If any process is in ready or dec-1, then all processes have initial value 1.
– If any process is in dec-0 then:
• No process is in dec-1, and no non-failed process is ready.
– If any process is in dec-1 then:
• No process is in dec-0, and no non-failed process is uncertain.
• Proof: LTTR.
– Key step: Third condition is preserved when p1 decides 1 after round 2.
– In this case, p1 knows that:
• Everyone’s input is 1.
• No one decided 0 at the end of round 1.
• Every other process has either become ready or has failed (without deciding).
– Implies third condition.
• Note critical use of synchrony here:
– p1 infers that non-failed processes are ready just because round 2 is
completed.
– Without synchrony, would need positive acknowledgments.
Correctness conditions (so far)
• Agreement and validity follow, for these three
rounds.
• Weak termination holds
• Strong termination:
– Doesn’t hold yet---must add a termination protocol.
– Allow process 2 to act as coordinator, then 3,…
– “Rotating coordinator” strategy
3-Phase Commit
• Round 4:
– All processes send current decision status (dec-0, uncertain, ready, or dec-1) to p2.
– If p2 receives any dec-0’s and hasn’t already decided, then p2 decides 0.
– If p2 receives any dec-1’s and hasn’t already decided, then p2 decides 1.
– If all received values, and its own value, are uncertain, then p2 decides 0.
– Otherwise (all values are uncertain or ready and at least one is ready), p2 becomes
ready, but doesn’t decide yet.
• Round 5 (like round 2):
– If p2 has (ever) decided 0, broadcasts “decide 0”, and similarly for 1.
– Else broadcasts “ready”.
– Any undecided process who receives “decide()” decides accordingly.
– Any process who receives “ready” becomes ready.
– Now p2 decides 1 if it hasn’t already decided.
• Round 6 (like round 3):
– If p2 has decided 1, broadcasts “decide 1”.
– Anyone else who receives “decide 1” decides 1.
• Continue with subsequent rounds for p3, p4,…
Correctness
• Key invariants still hold:
– If any process is in ready or dec-1, then all processes
have initial value 1.
– If any process is in dec-0 then:
• No process is in dec-1, and no non-failed process is ready.
– If any process is in dec-1 then:
• No process is in dec-0, and no non-failed process is
uncertain.
• Imply agreement, validity
• Strong termination:
– Because eventually some coordinator will finish the
job (unless everyone fails).
Complexity
• Time until everyone decides:
– Normal case 3
– Worst case 3n
• Messages until everyone decides:
– Normal case O(n)
– Worst case O(n²)
• Technicality: When can processes stop sending messages?
Practical issues for 3-phase commit
• Depends on strong assumptions, which may be hard to
guarantee in practice:
– Synchronous model:
• Could emulate with approximately-synchronized clocks, timeouts.
– Reliable message delivery:
• Could emulate with acks and retransmissions.
• But if retransmissions add too much delay, then we can’t emulate
the synchronous model accurately.
• Leads to unbounded delays, asynchronous model.
– Accurate diagnosis of process failures:
• Get this “for free” in the synchronous model.
• E.g., 3-phase commit algorithm lets process that doesn’t hear from
another process i at a round conclude that i must have failed.
• Very hard to guarantee in practice: In Internet, or even a LAN, how
to reliably distinguish failure of a process from lost communication?
• Other consensus algorithms can be used for commit,
including some that don’t depend on such strong timing
and reliability assumptions.
Paxos consensus algorithm
• A more robust consensus algorithm, could be used for commit.
• Tolerates process stopping and recovery, message losses and
delays,…
• Runs in partially synchronous model.
• Based on earlier algorithm [Dwork, Lynch, Stockmeyer].
• Algorithm idea:
– Processes use unreliable leader election subalgorithm to choose
coordinator, who tries to achieve consensus.
– Coordinator decides based on active support from majority of processes.
– Does not assume anything based on not receiving a message.
– Difficulties arise when multiple coordinators are active---must ensure
consistency.
• Practical difficulties with fault-tolerance in the synchronous model
motivate moving on to study the asynchronous model (next time).
Next time…
• Modeling asynchronous systems
• Reading: Chapter 8
MIT OpenCourseWare
http://ocw.mit.edu
6.852J / 18.437J Distributed Algorithms
Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/0628b22125dc0598ad8b1f27422a4f2c_MIT6_852JF09_lec06.pdf |
MIT OpenCourseWare
http://ocw.mit.edu
6.334 Power Electronics
Spring 2007
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms .
Chapter 2
Introduction to Rectifiers
Read Chapter 3 of “Principles of Power Electronics” (KSV) by J. G. Kassakian, M.
F. Schlecht, and G. C. Verghese, Addison-Wesley, 1991.
Start with simple half-wave rectifier (full-bridge rectifier directly follows).
Figure 2.1: Simple Half-wave Rectifier
In P.S.S.:
< vo > = < vx > = Vs/π    (2.1)
If Ld is big:

id ≃ Id = Vs/(πR)    (2.2)

If Ld/R ≫ 2π/ω, we can approximate the load as a constant current.

2.1 Load Regulation

Now consider adding some ac-side inductance Lc (reactance Xc = ωLc).
• Common situation: transformer leakage or line inductance, machine winding inductance, etc.
• Lc is typically ≪ Ld (filter inductance) as it is a parasitic element.
Figure 2.2: Adding Some AC-Side Inductance
Assume Ld ∼ ∞
(so ripple current is small). Therefore, we can approximate load
as a “special” current source.
“Special” since < vL >= 0 in P.S.S.
⇒ Id = < vx > / R    (2.3)
Assume we start with D2 conducting, D1 off (Vs sin(ωt) < 0). What happens when Vs sin(ωt) crosses zero?
Figure 2.3: Special Current
• D1 off is no longer valid.
• But just after turn-on, i1 is still 0.
• Therefore, D1 and D2 are both on during a commutation period, where current switches from D2 to D1.
Figure 2.4: Commutation Period
D2 will stay on as long as i2 > 0 (i1 < Id).
Analyze:
di1/dt = Vs sin(ωt) / Lc

i1(t) = ∫₀^{ωt} (Vs/(ωLc)) sin(Φ) dΦ
      = (Vs/(ωLc)) [−cos(Φ)] |₀^{ωt}
      = (Vs/(ωLc)) [1 − cos(ωt)]    (2.4)
Figure 2.5: Analyze Waveform
Commutation ends at ωt = u, when i1 = Id.
Commutation Period:
Id = (Vs/(ωLc)) [1 − cos u]  ⇒  cos u = 1 − ωLcId/Vs    (2.5)
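Eq. (2.5) is easy to check numerically; a short sketch (the component values Vs, ω, Lc, Id are illustrative only, not from the text):

```python
# Numeric sketch of eq. (2.5): commutation angle u from illustrative values.
import math

Vs = 100.0             # source amplitude [V] (illustrative)
w = 2 * math.pi * 60   # line frequency [rad/s] (illustrative: 60 Hz)
Lc = 1e-3              # ac-side commutating inductance [H] (illustrative)
Id = 20.0              # load current [A] (illustrative)

cos_u = 1 - w * Lc * Id / Vs   # eq. (2.5)
u = math.acos(cos_u)           # commutation angle [rad]
print(math.degrees(u))         # a few tens of degrees for these values
```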
As compared to the case of no commutating inductance, we lose a piece of output
voltage during commutation. We can calculate the average output voltage in P.S.S.
from < Vx >:
< Vx > = (1/2π) ∫_u^π Vs sin(Φ) dΦ
       = (Vs/2π) [cos(u) + 1]

From before, cos(u) = 1 − ωLcId/Vs = 1 − XcId/Vs, so

< Vx > = (Vs/π) [1 − ωLcId/(2Vs)]    (2.6)

So average output voltage drops with:

1. Increased current
Figure 2.6: Commutation Period
2. Increased frequency
3. Decreased source voltage
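These trends follow directly from eq. (2.6), < Vx > = Vs/π − (ωLc/2π) Id; a minimal sketch (component values are illustrative only):

```python
# Sketch of the dc-side regulation line implied by eq. (2.6).
import math

Vs = 100.0             # source amplitude [V] (illustrative)
w = 2 * math.pi * 60   # line frequency [rad/s] (illustrative)
Lc = 1e-3              # commutating inductance [H] (illustrative)

def avg_vx(Id):
    # Average output voltage: no-load value Vs/pi minus the commutation droop.
    return Vs / math.pi - (w * Lc / (2 * math.pi)) * Id

print(avg_vx(0.0))    # no-load: Vs/pi, the ideal no-Lc value
print(avg_vx(20.0))   # droops as load current increases
```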
We get the “Ideal” no Lc case at no load.
We can make a dc-side thevenin model for such a system as shown in Figure 2.7.
No actual dissipation in box: “resistance” appears because output voltage drops
when current increases.
This load regulation is a major consideration in most rectifier systems:
• Voltage changes with load.
• Max output power limitation.
(Figure 2.7 shows the dc-side Thevenin model: a source Vs/π in series with an effective resistance ωLc/2π, giving a < Vx > vs. Id line with slope −ωLc/2π.)
Figure 2.7: DC-Side Thevenin Model
All due to non-zero commutation time because of ac-side reactance.
This effect
occurs in most rectifier types (full-wave, multi-phase, thyristor, etc.).
Full-bridge
rectifier has similar problem (similar analysis).
Read Chapter 4 of KSV.
(Figure 2.8 shows the full-bridge circuit and compares its regulation curve with the half-bridge case: open-circuit voltages 2Vs/π for the full bridge and Vs/π for the half bridge.)
Figure 2.8: Full-Bridge Rectifier
Chapter 4 State Machines
6.01— Spring 2011— April 25, 2011
Chapter 4
State Machines
State machines are a method of modeling systems whose output depends on the entire history
of their inputs, and not just on the most recent input. Compared to purely functional systems,
in which the output is determined solely by the input, a state machine’s behavior also depends
on its history. State machines can be used to model a wide variety of systems,
including:
• user interfaces, with typed input, mouse clicks, etc.;
• conversations, in which, for example, the meaning of the word “it” depends on the history of things that have been said;
• the state of a spacecraft, including which valves are open and closed, the levels of fuel and oxygen, etc.; and
• the sequential patterns in DNA and what they mean.
State machine models can either be continuous time or discrete time. In continuous time models,
we typically assume continuous spaces for the range of values of inputs and outputs, and use
differential equations to describe the system’s dynamics. This is an interesting and important
approach, but it is hard to use it to describe the desired behavior of our robots, for example. The
loop of reading sensors, computing, and generating an output is inherently discrete and too slow
to be well-modeled as a continuous-time process. Also, our control policies are often highly
nonlinear and discontinuous. So, in this class, we will concentrate on discrete-time models, meaning
models whose inputs and outputs are determined at specific increments of time, and which are
synchronized to those specific time samples. Furthermore, in this chapter, we will make no
assumptions about the form of the dependence of the output on the time-history of inputs; it can be
an arbitrary function.
Generally speaking, we can think of the job of an embedded system as performing a transduction
from a stream (infinite sequence) of input values to a stream of output values. In order to specify
the behavior of a system whose output depends on the history of its inputs mathematically, you
could think of trying to specify a mapping from i1, . . . , it (sequences of previous inputs) to ot
(current output), but that could become very complicated to specify or execute as the history gets
longer. In the state-machine approach, we try to find some set of states of the system, which
capture the essential properties of the history of the inputs and are used to determine the current
output of the system as well as its next state. It is an interesting and sometimes difficult modeling
problem to find a good state-machine model with the right set of states; in this chapter we will
explore how the ideas of PCAP can aid us in designing useful models.
One thing that is particularly interesting and important about state machine models is how many
ways we can use them. In this class, we will use them in three fairly different ways:
1. Synthetically: State machines can specify a “program” for a robot or other system embedded
in the world, with inputs being sensor readings and outputs being control commands.
2. Analytically: State machines can describe the behavior of the combination of a control system
and the environment it is controlling; the input is generally a simple command to the entire
system, and the output is some simple measure of the state of the system. The goal here is
to analyze global properties of the coupled system, like whether it will converge to a steady
state, or will oscillate, or will diverge.
3. Predictively: State machines can describe the way the environment works, for example, where
I will end up if I drive down some road from some intersection. In this case, the inputs are
control commands and the outputs are states of the external world. Such a model can be
used to plan trajectories through the space of the external world to reach desirable states, by
considering different courses of action and using the model to predict their results.
We will develop a single formalism, and an encoding of that formalism in Python classes, that
will serve all three of these purposes.
Our strategy for building very complex state machines will be, abstractly, the same strategy we
use to build any kind of complex machine. We will define a set of primitive machines (in this
case, an infinite class of primitive machines) and then a collection of combinators that allow us
to put primitive machines together to make more complex machines, which can themselves be
abstracted and combined to make more complex machines.
4.1 Primitive state machines
We can specify a transducer (a process that takes as input a sequence of values which serve as
inputs to the state machine, and returns as output the set of outputs of the machine for each input)
as a state machine (SM) by specifying:
• a set of states, S,
• a set of inputs, I, also called the input vocabulary,
• a set of outputs, O, also called the output vocabulary,
• a next-state function, n(it, st) ↦ st+1, that maps the input at time t and the state at time t to
the state at time t + 1,
• an output function, o(it, st) ↦ ot, that maps the input at time t and the state at time t to the
output at time t; and
• an initial state, s0, which is the state at time 0.
Here are a few state machines, to give you an idea of the kind of systems we are considering.
• A tick-tock machine that generates the sequence 1, 0, 1, 0, . . . is a finite-state machine that
ignores its input.
• The controller for a digital watch is a more complicated finite-state machine: it transduces a
sequence of inputs (combination of button presses) into a sequence of outputs (combinations
of segments illuminated in the display).
• The controller for a bank of elevators in a large office building: it | https://ocw.mit.edu/courses/6-01sc-introduction-to-electrical-engineering-and-computer-science-i-spring-2011/063daea1b8a3573d2aff0f0b96d390da_MIT6_01SCS11_chap04.pdf |
combinations
of segments illuminated in the display).
• The controller for a bank of elevators in a large office building: it transduces the current set
of buttons being pressed and sensors in the elevators (for position, open doors, etc.) into
commands to the elevators to move up or down, and open or close their doors.
The very simplest kind of state machine is a pure function: if the machine has no state, and the
output function is purely a function of the input, for example, ot = it + 1, then we have an
immediate functional relationship between inputs and outputs on the same time step. Another
simple class of SMs are finite-state machines, for which the set of possible states is finite. The
elevator controller can be thought of as a finite-state machine, if elevators are modeled as being
only at one floor or another (or possibly between floors); but if the controller models the exact
position of the elevator (for the purpose of stopping smoothly at each floor, for example), then it
will be most easily expressed using real numbers for the state (though any real instantiation of it
can ultimately only have finite precision). A different, large class of SMs are describable as linear,
time-invariant (LTI) systems. We will discuss these in detail in chapter ??.
4.1.1 Examples
Let’s look at several examples of state machines, with complete formal definitions.
4.1.1.1 Language acceptor
Here is a finite-state machine whose output is true if the input string adheres to a simple pattern,
and false otherwise. In this case, the pattern has to be a, b, c, a, b, c, a, b, c, . . ..
It uses the states 0, 1, and 2 to stand for the situations in which it is expecting an a, b, and c,
respectively; and it uses state 3 for the situation in which it has seen an input that was not the one
that was expected. Once the machine goes to state 3 (sometimes called a rejecting state), it never
exits that state.
S = {0, 1, 2, 3}
I = {a, b, c}
O = {true, false}

n(s, i) = 1 if s = 0, i = a
          2 if s = 1, i = b
          0 if s = 2, i = c
          3 otherwise

o(s, i) = false if n(s, i) = 3
          true otherwise

s0 = 0
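This definition translates directly into code; a minimal sketch using plain functions (not the SM class framework introduced later in this chapter):

```python
# Sketch of the a,b,c,... language acceptor as plain next-state and
# output functions, following the formal definition above.
def n(s, i):
    # expected input in each state: 0 -> 'a', 1 -> 'b', 2 -> 'c'
    if s == 0 and i == 'a': return 1
    if s == 1 and i == 'b': return 2
    if s == 2 and i == 'c': return 0
    return 3  # rejecting state: once entered, never exited

def o(s, i):
    return n(s, i) != 3

def run(inputs, s0=0):
    outputs, s = [], s0
    for i in inputs:
        outputs.append(o(s, i))
        s = n(s, i)
    return outputs

print(run(['a', 'b', 'c', 'a', 'c', 'a', 'b']))
# -> [True, True, True, True, False, False, False]
```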
Figure 4.1 shows a state transition diagram for this state machine. Each circle represents a state.
The arcs connecting the circles represent possible transitions the machine can make; the arcs are
labeled with a pair i, o, which means that if the machine is in the state denoted by a circle with
label s, and gets an input i, then the arc points to the next state, n(s, i) and the output generated
o(s, i) = o. Some arcs have several labels, indicating that there are many different inputs that will
cause the same transition. Arcs can only be traversed in the direction of the arrow.
For a state transition diagram to be complete, there must be an arrow emerging from each state
for each possible input i (if the next state is the same for some inputs, then we draw the graph
more compactly by using a single arrow with multiple labels, as you will see below).
Figure 4.1 State transition diagram for language acceptor.
We will use tables like the following one to examine the evolution of a state machine:
time    0    1    2    ...
input   i0   i1   i2   ...
state   s0   s1   s2   ...
output  o1   o2   o3   ...
For each column in the table, given the current input value and state we can use the output
function o to determine the output in that column; and we use the n function applied to that
input and state value to determine the state in the next column. Thus, just knowing the input
sequence and s0, and the next-state and output functions of the machine will allow you to fill in
the rest of the table.
For example, here is the state of the machine at the initial time point:
time    0    1    2    ...
input   i0
state   s0
output
Using our knowledge of the next state function n, we have:
time    0    1    2    ...
input   i0
state   s0   s1
output
and using our knowledge of the output function o, we have at the next input value
time    0    1    2    ...
input   i0   i1
state   s0   s1
output  o1
This completes one cycle of the state machine, and we can now repeat the process.
Here is a table showing what the language acceptor machine does with input sequence (’a’, ’b’,
’c’, ’a’, ’c’, ’a’, ’b’):
time    0     1     2     3     4      5      6      7
input   'a'   'b'   'c'   'a'   'c'    'a'    'b'
state   0     1     2     0     1      3      3      3
output  True  True  True  True  False  False  False
The output sequence is (True, True, True, True, False, False, False).
Clearly we don’t want to analyze a system by considering all input sequences, but this table helps
us understand the state transitions of the system model.
To learn more:
Finite-state machine language acceptors can be built for a class of patterns
called regular languages. There are many more complex patterns (such as
the set of strings with equal numbers of 1’s and 0’s) that cannot be
recognized by finite-state machines, but can be recognized by a specialized
kind of infinite-state machine called a stack machine. To learn more about
these fundamental ideas in computability theory, start with the Wikipedia
article on Computability_theory_(computer_science)
4.1.1.2 Up and down counter
This machine can count up and down; its state space is the countably infinite set of integers. It
starts in state 0. Now, if it gets input u, it goes to state 1; if it gets u again, it goes to state 2. If it
gets d, it goes back down to 1, and so on. For this machine, the output is always the same as the
next state.
S = integers
I = {u, d}
O = integers

n(s, i) = s + 1 if i = u
          s − 1 if i = d

o(s, i) = n(s, i)
s0 = 0
Here is a table showing what the up and down counter does with input sequence u, u, u, d, d, u:
time    0   1   2   3   4   5   6
input   u   u   u   d   d   u
state   0   1   2   3   2   1   2
output  1   2   3   2   1   2
The output sequence is 1, 2, 3, 2, 1, 2.
4.1.1.3 Delay
An even simpler machine just takes the input and passes it through to the output, but with one
step of delay, so the kth element of the input sequence will be the k + 1st element of the output
sequence. Here is the formal machine definition:
S = anything
I = anything
O = anything
n(s, i) = i
o(s, i) = s
s0 = 0
Given an input sequence i0, i1, i2, . . ., this machine will produce an output sequence
0, i0, i1, i2, . . .. The initial 0 comes because it has to be able to produce an output before it
has even seen an input, and that output is produced based on the initial state, which is 0. This
very simple building block will come in handy for us later on.
Here is a table showing what the delay machine does with input sequence 3, 1, 2, 5, 9:
time    0   1   2   3   4   5
input   3   1   2   5   9
state   0   3   1   2   5   9
output  0   3   1   2   5
The output sequence is 0, 3, 1, 2, 5.
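As a check, the delay machine can also be sketched as a plain function (state = previous input):

```python
# Sketch of the delay machine: the state stores the previous input.
def run_delay(inputs, s0=0):
    outs, s = [], s0
    for i in inputs:
        outs.append(s)   # output is the stored previous input
        s = i            # next state is the current input
    return outs

print(run_delay([3, 1, 2, 5, 9]))   # -> [0, 3, 1, 2, 5]
```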
4.1.1.4 Accumulator
Here is a machine whose output is the sum of all the inputs it has ever seen.
S = numbers
I = numbers
O = numbers
n(s, i) = s + i
o(s, i) = n(s, i)
s0 = 0
Here is a table showing what the accumulator does with input sequence 100, −3, 4, −123, 10:
time    0    1    2    3     4    5
input   100  -3   4    -123  10
state   0    100  97   101   -22  -12
output  100  97   101  -22   -12
4.1.1.5 Average2
Here is a machine whose output is the average of the current input and the previous input. It
stores its previous input as its state.
S = numbers
I = numbers
O = numbers
n(s, i) = i
o(s, i) = (s + i)/2
s0 = 0
Here is a table showing what the average2 machine does with input sequence 100, −3, 4, −123, 10:
time    0    1     2    3      4      5
input   100  -3    4    -123   10
state   0    100   -3   4      -123   10
output  50   48.5  0.5  -59.5  -56.5
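The average2 machine translates directly into a sketch with plain functions (not yet using the SM class defined in the next section):

```python
# Sketch of the average2 machine: state = previous input,
# output = average of previous and current input.
def n(s, i):
    return i

def o(s, i):
    return (s + i) / 2

def run(inputs, s0=0):
    outs, s = [], s0
    for i in inputs:
        outs.append(o(s, i))
        s = n(s, i)
    return outs

print(run([100, -3, 4, -123, 10]))
# -> [50.0, 48.5, 0.5, -59.5, -56.5]
```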
4.1.2 State machines in Python
Now, it is time to make computational implementations of state machine models. In this section
we will build up some basic Python infrastructure to make it straightforward to define primitive
machines; in later sections we will see how to combine primitive machines into more complex
structures.
We will use Python’s object-oriented facilities to make this convenient. We have an abstract class,
SM, which will be the superclass for all of the particular state machine classes we define. It does
not make sense to make an instance of SM, because it does not actually specify the behavior of
the machine; it just provides some utility methods. To specify a new type of state machine, you
define a new class that has SM as a superclass.
In any subclass of SM, it is crucial to define an attribute startState, which specifies the initial
state of the machine, and a method getNextValues which takes the state at time t and the input
at time t as input, and returns the state at time t + 1 and the output at time t. This is a choice
that we have made as designers of our state machine model; we will rely on these two pieces of
information in our underlying infrastructure for simulating state machines, as we will see shortly.
Here, for example, is the Python code for an accumulator state machine, which implements the
definition given in section 4.1.1.4. 29
class Accumulator(SM):
    startState = 0
    def getNextValues(self, state, inp):
        return (state + inp, state + inp)
It is important to note that getNextValues does not change the state of the machine, in other
words, it does not mutate a state variable. Its job is to be a pure function: to answer the question
of what the next state and output would be if the current state were state and the current input
were inp. We will use the getNextValues methods of state machines later in the class to make
plans by considering alternative courses of action, so they must not have any side effects. As we
noted, this is our choice as designers of the state machine infrastructure. We could have chosen to
implement things differently; however, this choice will prove to be very useful. Thus, in all our
state machines, the function getNextValues will capture the transition from input and state to
output and state, without mutating any stored state values.
To run a state machine, we make an instance of the appropriate state-machine class, call its start
method (a built in method we will see shortly) to set the state to the starting state, and then ask
it to take steps; each step consists of generating an output value (which is printed) and updating
the state to the next state, based on the input. The abstract superclass SM defines the start and
step methods. These methods are in charge of actually initializing and then changing the state
of the machine as it is being executed. They do this by calling the getNextValues method for
the class to which this instance belongs. The start method takes no arguments (but, even so, we
have to put parentheses after it, indicating that we want to call the method, not to do something
29 Throughout this code, we use inp, instead of input, which would be clearer. The reason is that the name input is
used by Python as a function. Although it is legal to re-use it as the name of an argument to a procedure, doing so is a
source of bugs that are hard to find (if, for instance, you misspell the name input in the argument list, your references
to input later in the procedure will be legal, but will return the built-in function.)
with the method itself); the step method takes one argument, which is the input to the machine
on the next step. So, here is a run of the accumulator, in which we feed it inputs 3, 4, and -2:
>>> a = Accumulator()
>>> a.start()
>>> a.step(3)
3
>>> a.step(4)
7
>>> a.step(-2)
5
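The SM superclass itself is not listed in this excerpt; a minimal sketch consistent with the start/step behavior described above (the real 6.01 library class has additional utility methods such as transduce and run):

```python
# Minimal sketch of the SM superclass described in the text: start()
# initializes the state; step(inp) updates it via getNextValues and
# returns the output.
class SM:
    def start(self):
        self.state = self.startState

    def step(self, inp):
        # delegate to the subclass's pure getNextValues function
        s, out = self.getNextValues(self.state, inp)
        self.state = s
        return out

class Accumulator(SM):
    startState = 0
    def getNextValues(self, state, inp):
        return (state + inp, state + inp)

a = Accumulator()
a.start()
print(a.step(3), a.step(4), a.step(-2))   # -> 3 7 5
```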
The class SM specifies how state machines work in general; the class Accumulator specifies how
accumulator machines work in general; and the instance a is a particular machine with a particu
lar current state. We can make another instance of accumulator:
>>> b = Accumulator()
>>> b.start()
>>> b.step(10)
10
>>> b.state
10
>>> a.state
5
Now, we have | https://ocw.mit.edu/courses/6-01sc-introduction-to-electrical-engineering-and-computer-science-i-spring-2011/063daea1b8a3573d2aff0f0b96d390da_MIT6_01SCS11_chap04.pdf |
Now, we have two accumulators, a, and b, which remember, individually, in what states they
exist. Figure 4.2 shows the class and instance structures that will be in place after creating these
two accumulators.
Figure 4.2 Classes and instances for an Accumulator SM. All of the dots represent procedure
objects.
4.1.2.1 Defining a type of SM
Let’s go back to the definition of the Accumulator class, and study it piece by piece.
First, we define the class and say that it is a subclass of SM:
class Accumulator(SM):
Next, we define an attribute of the class, startState, which is the starting state of the machine.
In this case, our accumulator starts up with the value 0.
startState = 0
Note that startState is required by the underlying SM class, so we must either define it in our
subclass definition or use a default value from the SM superclass.
The next method defines both the next-state function and the output function, by taking the current state and input as arguments and returning a tuple containing both the next state and the
output.
For our accumulator, the next state is just the sum of the previous state and the input; and the
output is the same thing:
def getNextValues(self, state, inp):
return (state + inp, state + inp)
It is crucial that getNextValues be a pure function; that is, that it not change the state of the
object (by assigning to any attributes of self). It must simply compute the necessary values and
return them. We do not promise anything about how many times this method will be called and
in what circumstances.
Sometimes, it is convenient to arrange it so that the class really defines a range of machines with
slightly different behavior, which depends on some parameters that we set at the time we create
an instance. So, for example, if we wanted to specify the initial value for an accumulator at the
time the machine is created, we could add an __init__ method30 to our class, which takes an
initial value as a parameter and uses it to set an attribute called startState of the instance.31
class Accumulator(SM):
def __init__(self, initialValue):
self.startState = initialValue
def getNextValues(self, state, inp):
return state + inp, state + inp
Now we can make an accumulator and run it like this:
>>> c = Accumulator(100)
>>> c.start()
>>> c.step(20)
120
>>> c.step(2)
122
30 Remember that the __init__ method is a special feature of the Python object-oriented system, which is called whenever an instance of the associated class is created.
31 Note that in the original version of Accumulator, startState was an attribute of the class, since it was the same for
every instance of the class; now that we want it to be different for different instances, we need startState to be an
attribute of the instance rather than the class, which is why we assign it in the __init__ method, which modifies the
already-created instance.
4.1.2.2 The SM Class
The SM class contains generally useful methods that apply to all state machines. A state machine
is an instance of any subclass of SM, that has defined the attribute startState and the method
getNextValues, as we did for the Accumulator class. Here we examine these methods in more
detail.
Running a machine
The first group of methods allows us to run a state machine. To run a machine is to provide it with
a sequence of inputs and then sequentially go forward, computing the next state and generating
the next output, as if we were filling in a state table.
To run a machine, we have to start by calling the start method. All it does is create an attribute
of the instance, called state, and assign to it the value of the startState attribute. It is crucial
that we have both of these attributes: if we were to just modify startState, then if we wanted
to run this machine again, we would have permanently forgotten what the starting state should
be. Note that state becomes a repository for the state of this instance; however we should not
mutate it directly. This variable becomes the internal representation of state for each instance of
this class.
class SM:
def start(self):
self.state = self.startState
Once it has started, we can ask it to take a step, using the step method, which, given an input,
computes the output and updates the internal state of the machine, and returns the output value.
def step(self, inp):
(s, o) = self.getNextValues(self.state, inp)
self.state = s
return o
To run a machine on a whole sequence of input values, we can use the transduce method, which
will return the sequence of output values that results from feeding the elements of the list inputs
into the machine in order.
def transduce(self, inputs):
self.start()
return [self.step(inp) for inp in inputs]
Here are the results of running transduce on our accumulator machine. We run it twice, first
with a simple call that does not generate any debugging information, and simply returns the
result. The second time, we ask it to be verbose, resulting in a print-out of what is happening on
the intermediate steps.32
32 In fact, the second machine trace, and all the others in this section were generated with a call like:
>>> a.transduce([100, -3, 4, -123, 10], verbose = True, compact = True) | https://ocw.mit.edu/courses/6-01sc-introduction-to-electrical-engineering-and-computer-science-i-spring-2011/063daea1b8a3573d2aff0f0b96d390da_MIT6_01SCS11_chap04.pdf |
See the Infrastructure Guide for details on different debugging options. To simplify the code examples we show in
these notes, we have omitted parts of the code that are responsible for debugging printouts.
>>> a = Accumulator()
>>> a.transduce([100, -3, 4, -123, 10])
[100, 97, 101, -22, -12]
>>> a.transduce([100, -3, 4, -123, 10], verbose = True)
Start state: 0
In: 100 Out: 100 Next State: 100
In: -3 Out: 97 Next State: 97
In: 4 Out: 101 Next State: 101
In: -123 Out: -22 Next State: -22
In: 10 Out: -12 Next State: -12
[100, 97, 101, -22, -12]
Some machines do not take any inputs; in that case, we can simply call the SM run method, which
is equivalent to doing transduce on an input sequence of [None, None, None, ...].
def run(self, n = 10):
return self.transduce([None]*n)
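Collecting the pieces above, a minimal self-contained version of the SM base class (without the debugging machinery, which these notes omit anyway) together with the Accumulator subclass reproduces the traces shown earlier:

```python
# Minimal sketch of the SM base class assembled from the methods above,
# plus the Accumulator subclass, so the earlier traces can be reproduced.
class SM:
    def start(self):
        self.state = self.startState
    def step(self, inp):
        (s, o) = self.getNextValues(self.state, inp)
        self.state = s
        return o
    def transduce(self, inputs):
        self.start()
        return [self.step(inp) for inp in inputs]
    def run(self, n=10):
        return self.transduce([None] * n)

class Accumulator(SM):
    startState = 0
    def getNextValues(self, state, inp):
        return (state + inp, state + inp)

print(Accumulator().transduce([100, -3, 4, -123, 10]))
# [100, 97, 101, -22, -12]
```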
4.1.2.2.1 Default methods
This section is optional.
In order to make the specifications for the simplest machine types as short as possible, we have
also provided a set of default methods in the SM class. These default methods say that, unless they
are overridden in a subclass, as they were when we defined Accumulator, we will assume that a
machine starts in state None and that its output is the same as its next state.
startState = None
def getNextValues(self, state, inp): | https://ocw.mit.edu/courses/6-01sc-introduction-to-electrical-engineering-and-computer-science-i-spring-2011/063daea1b8a3573d2aff0f0b96d390da_MIT6_01SCS11_chap04.pdf |
nextState = self.getNextState(state, inp)
return (nextState, nextState)
Because these methods are provided in SM, we can define, for example, a state machine whose output is always k times its input, with this simple class definition, which defines a getNextState
procedure that simply returns a value that is treated as both the next state and the output.
class Gain(SM):
def __init__(self, k):
self.k = k
def getNextState(self, state, inp):
return inp * self.k
We can use this class as follows:
>>> g = Gain(3)
>>> g.transduce([1.1, -2, 100, 5])
[3.3000000000000003, -6, 300, 15]
The parameter k is specified at the time the instance is created. Then, the output of the machine
is always just k times the input.
We can also use this strategy to write the Accumulator class even more succinctly:
class Accumulator(SM):
startState = 0
def getNextState(self, state, inp):
return state + inp
The output of the getNextState method will be treated both as the output and the next state of
the machine, because the inherited getNextValues function uses it to compute both values.
4.1.2.3 More examples
Here are Python versions of the rest of the machines we introduced in the first section.
Language acceptor
Here is a Python class for a machine that “accepts” the language that is any prefix of the infinite
sequence [’a’, ’b’, ’c’, ’a’, ’b’, ’c’, ....].
class ABC(SM):
startState = 0
def getNextValues(self, state, inp):
if state == 0 and inp == ’a’:
return (1, True)
elif state == 1 and inp == ’b’:
return (2, True)
elif state == 2 and inp == ’c’:
return (0, True)
else:
return (3, False)
It behaves as we would expect. As soon as it sees a character that deviates from the desired
sequence, the output is False, and it will remain False for ever after.
>>> abc = ABC()
>>> abc.transduce([’a’,’a’,’a’], verbose = True)
Start state: 0
In: a Out: True Next State: 1
In: a Out: False Next State: 3
In: a Out: False Next State: 3
[True, False, False]
>>> abc.transduce([’a’, ’b’, ’c’, ’a’, ’c’, ’a’, ’b’], verbose = True)
Start state: 0
In: a Out: True Next State: 1
In: b Out: True Next State: 2
In: c Out: True Next State: 0
In: a Out: True Next State: 1
In: c Out: False Next State: 3
In: a Out: False Next State: 3
In: b Out: False Next State: 3
[True, True, True, True, False, False, False]
Count up and down
This is a direct translation of the machine defined in section 4.1.1.2.
class UpDown(SM):
startState = 0
def getNextState(self, state, inp):
if inp == ’u’:
return state + 1
else:
return state - 1
We take advantage of the default getNextValues method to make the output the same as the
next state.
>>> ud = UpDown()
>>> ud.transduce([’u’,’u’,’u’,’d’,’d’,’u’], verbose = True)
Start state: 0
In: u Out: 1 Next State: 1
In: u Out: 2 Next State: 2
In: u Out: 3 Next State: 3
In: d Out: 2 Next State: 2
In: d Out: 1 Next State: 1
In: u Out: 2 Next State: 2
[1, 2, 3, 2, 1, 2]
Delay
In order to make a machine that delays its input stream by one time step, we have to specify what
the first output should be. We do this by passing the parameter, v0, into the __init__ method
of the Delay class. The state of a Delay machine is just the input from the previous step, and the
output is the state (which is, therefore, the input from the previous time step).
class Delay(SM):
def __init__(self, v0):
self.startState = v0
def getNextValues(self, state, inp):
return (inp, state)
>>> d = Delay(7)
>>> d.transduce([3, 1, 2, 5, 9], verbose = True)
Start state: 7
In: 3 Out: 7 Next State: 3
In: 1 Out: 3 Next State: 1
In: 2 Out: 1 Next State: 2
In: 5 Out: 2 Next State: 5
In: 9 Out: 5 Next State: 9
[7, 3, 1, 2, 5]
>>> d100 = Delay(100)
>>> d100.transduce([3, 1, 2, 5, 9], verbose = True)
Start state: 100
In: 3 Out: 100 Next State: 3
In: 1 Out: 3 Next State: 1
In: 2 Out: 1 Next State: 2
In: 5 Out: 2 Next State: 5
In: 9 Out: 5 Next State: 9
[100, 3, 1, 2, 5]
We will use this machine so frequently that we put its definition in the sm module (file), along with the SM class.
R
We can use R as another name for the Delay class of state machines. It will be an important
primitive in a compositional system of linear time-invariant systems, which we explore in the
next chapter.
Average2
Here is a state machine whose output at time t is the average of the input values from times t − 1
and t.
class Average2(SM):
startState = 0
def getNextValues(self, state, inp):
return (inp, (inp + state) / 2.0)
It needs to remember the previous input, so the next state is equal to the input. The output is the
average of the current input and the state (because the state is the previous input).
>>> a2 = Average2()
>>> a2.transduce([10, 5, 2, 10], verbose = True, compact = True)
Start state: 0
In: 10 Out: 5.0 Next State: 10
In: 5 Out: 7.5 Next State: 5
In: 2 Out: 3.5 Next State: 2
In: 10 Out: 6.0 Next State: 10
[5.0, 7.5, 3.5, 6.0]
Sum of last three inputs
Here is an example of a state machine where the state is actually a list of values. Generally speaking, the state can be anything (a dictionary, an array, a list); but it is important to be sure that
the getNextValues method does not make direct changes to components of the state, instead
returning a new copy of the state with appropriate changes. We may make several calls to the
getNextValues function on one step (or, later in our work, call the getNextValues function
with several different inputs to see what would happen under different choices); these function
calls are made to find out a value of the next state, but if they actually change the state, then the
same call with the same arguments may return a different value the next time.
This machine generates as output at time t the sum of i[t − 2], i[t − 1] and i[t]; that is, of the last three
inputs. In order to do this, it has to remember the values of two previous inputs; so the state is
a pair of numbers. We have defined it so that the initial state is (0, 0). The getNextValues
method gets rid of the oldest value that it has been remembering, and remembers the current
input as part of the state; the output is the sum of the current input with the two old inputs that
are stored in the state. Note that the first line of the getNextValues procedure is a structured
assignment (see section 3.3).
class SumLast3 (SM):
startState = (0, 0)
def getNextValues(self, state, inp):
(previousPreviousInput, previousInput) = state
return ((previousInput, inp),
previousPreviousInput + previousInput + inp)
>>> sl3 = SumLast3()
>>> sl3.transduce([2, 1, 3, 4, 10, 1, 2, 1, 5], verbose = True)
Start state: (0, 0)
In: 2 Out: 2 Next State: (0, 2)
In: 1 Out: 3 Next State: (2, 1)
In: 3 Out: 6 Next State: (1, 3)
In: 4 Out: 8 Next State: (3, 4)
In: 10 Out: 17 Next State: (4, 10)
In: 1 Out: 15 Next State: (10, 1)
In: 2 Out: 13 Next State: (1, 2)
In: 1 Out: 4 Next State: (2, 1)
In: 5 Out: 8 Next State: (1, 5)
[2, 3, 6, 8, 17, 15, 13, 4, 8]
Selector
A simple functional machine that is very useful is the Select machine. You can make many
different versions of this, but the simplest one takes an input that is a stream of lists or tuples of
several values (or structures of values) and generates the stream made up only of the kth elements
of the input values. Which particular component this machine is going to select is determined by
the value k, which is passed in at the time the machine instance is initialized.
class Select (SM):
def __init__(self, k):
self.k = k
def getNextState(self, state, inp):
return inp[self.k]
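Select defines only getNextState, so it relies on the default getNextValues inherited from SM. A brief usage sketch (with a pared-down SM base class that includes the defaults described in the previous section):

```python
# Minimal SM base class including the default methods described earlier,
# so that Select can be run standalone.
class SM:
    startState = None
    def start(self):
        self.state = self.startState
    def step(self, inp):
        (s, o) = self.getNextValues(self.state, inp)
        self.state = s
        return o
    def transduce(self, inputs):
        self.start()
        return [self.step(inp) for inp in inputs]
    def getNextValues(self, state, inp):
        # Default: the output is the same as the next state.
        nextState = self.getNextState(state, inp)
        return (nextState, nextState)

class Select(SM):
    def __init__(self, k):
        self.k = k
    def getNextState(self, state, inp):
        return inp[self.k]

# Select the second component of each input tuple:
print(Select(1).transduce([('a', 1), ('b', 2), ('c', 3)]))  # [1, 2, 3]
```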
4.1.3 Simple parking gate controller
As one more demonstration, here is a simple example of a finite-state controller for a gate leading
out of a parking lot.
The gate has three sensors:
• gatePosition has one of three values ’top’, ’middle’, ’bottom’, signifying the position of the arm of the parking gate.
• carAtGate is True if a car is waiting to come through the gate and False
otherwise.
• carJustExited is True if a car has just passed through the gate; it is true
for only one step before resetting to False.
Image by MIT OpenCourseWare.
Pause to try 4.1. How many possible inputs are there?
A: 12.
The gate has three possible outputs (think of them as controls to the motor for the gate arm):
’raise’, ’lower’, and ’nop’. (Nop means “no operation.”)
Roughly, here is what the gate needs to do:
•
If a car wants to come through, the gate needs to raise the arm until it is at the top position.
• Once the gate is at the top position, it has to stay there until the car has driven through the
gate.
• After the car has driven through the gate needs to lower the arm until it reaches the bottom
position.
So, we have designed a simple finite-state controller with a state transition diagram as shown
in figure 4.3. The machine has four possible states: ’waiting’ (for a car to arrive at the gate),
’raising’ (the arm), ’raised’ (the arm is at the top position and we’re waiting for the car to
drive through the gate), and ’lowering’ (the arm). To keep the figure from being too cluttered,
we do not label each arc with every possible input that would cause that transition to be made:
instead, we give a condition (think of it as a Boolean expression) that, if true, would cause the
transition to be followed. The conditions on the arcs leading out of each state cover all the possible
inputs, so the machine remains completely well specified.
Figure 4.3 State transition diagram for parking gate controller.
Here is a simple state machine that implements the controller for the parking gate. For compactness, the getNextValues method starts by determining the next state of the gate. Then,
depending on the next state, the generateOutput method selects the appropriate output.
class SimpleParkingGate (SM):
startState = ’waiting’
def generateOutput(self, state):
if state == ’raising’:
return ’raise’
elif state == ’lowering’:
return ’lower’
else:
return ’nop’
def getNextValues(self, state, inp):
(gatePosition, carAtGate, carJustExited) = inp
if state == ’waiting’ and carAtGate:
nextState = ’raising’
elif state == ’raising’ and gatePosition == ’top’:
nextState = ’raised’
elif state == ’raised’ and carJustExited:
nextState = ’lowering’
elif state == ’lowering’ and gatePosition == ’bottom’:
nextState = ’waiting’
else:
nextState = state
return (nextState, self.generateOutput(nextState))
In the situations where the state does not change (that is, when the arcs lead back to the same
state in the diagram), we do not explicitly specify the next state in the code: instead, we cover it in
the else clause, saying that unless otherwise specified, the state stays the same. So, for example,
if the state is raising but the gatePosition is not yet top, then the state simply stays raising
until the top is reached.
>>> spg = SimpleParkingGate()
>>> spg.transduce(testInput, verbose = True)
Start state: waiting
In: (’bottom’, False, False) Out: nop Next State: waiting
In: (’bottom’, True, False) Out: raise Next State: raising
In: (’bottom’, True, False) Out: raise Next State: raising | https://ocw.mit.edu/courses/6-01sc-introduction-to-electrical-engineering-and-computer-science-i-spring-2011/063daea1b8a3573d2aff0f0b96d390da_MIT6_01SCS11_chap04.pdf |
In: (’middle’, True, False) Out: raise Next State: raising
In: (’middle’, True, False) Out: raise Next State: raising
In: (’middle’, True, False) Out: raise Next State: raising
In: (’top’, True, False) Out: nop Next State: raised
In: (’top’, True, False) Out: nop Next State: raised
In: (’top’, True, False) Out: nop Next State: raised
In: (’top’, True, True) Out: lower Next State: lowering
In: (’top’, True, True) Out: lower Next State: lowering
In: (’top’, True, False) Out: lower Next State: lowering
In: (’middle’, True, False) Out: lower Next State: lowering
In: (’middle’, True, False) Out: lower Next State: lowering
In: (’middle’, True, False) Out: lower Next State: lowering
In: (’bottom’, True, False) Out: nop Next State: waiting
In: (’bottom’, True, False) Out: raise Next State: raising
[’nop’, ’raise’, ’raise’, ’raise’, ’raise’, ’raise’, ’nop’, ’nop’, ’nop’, ’lower’, ’lower’,
’lower’, ’lower’, ’lower’, ’lower’, ’nop’, ’raise’]
Exercise 4.1.
What would the code for this machine look like if it were written without
using the generateOutput method?
4.2 Basic combination and abstraction of state machines
In the previous section, we studied the definition of a primitive state machine, and saw a number
of examples. State machines are useful for a wide variety of problems, but specifying complex
machines by explicitly writing out their state transition functions can be quite tedious. Ultimately,
we will want to build large state-machine descriptions compositionally, by specifying primitive
machines and then combining them into more complex systems. We will start here by looking at
ways of combining state machines.
We can apply our PCAP (primitive, combination, abstraction, pattern) methodology, to build
more complex SMs out of simpler ones. In the rest of this section we consider “dataflow” compositions, where inputs and outputs of primitive machines are connected together; after that, we
consider “conditional” compositions that make use of different sub-machines depending on the
input to the machine, and finally “sequential” compositions that run one machine after another.
4.2.1 Cascade composition
In cascade composition, we take two machines and use the output of the first one as the input
to the second, as shown in figure 4.4. The result is a new composite machine, whose input vocabulary is the input vocabulary of the first machine and whose output vocabulary is the output
vocabulary of the second machine. It is, of course, crucial that the output vocabulary of the first
machine be the same as the input vocabulary of the second machine.
Figure 4.4 Cascade composition of state machines
Recalling the Delay machine from the previous section, let’s see what happens if we make the
cascade composition of two delay machines. Let m1 be a delay machine with initial value 99
and m2 be a delay machine with initial value 22. Then Cascade(m1, m2) is a new state machine,
constructed by making the output of m1 be the input of m2. Now, imagine we feed a sequence of
values, 3, 8, 2, 4, 6, 5, into the composite machine, m. What will come out? Let’s try to understand
this by making a table of the states and values at different times:
time | m1 input | m1 state | m1 output | m2 input | m2 state | m2 output
-----|----------|----------|-----------|----------|----------|----------
  0  |    3     |    99    |    99     |    99    |    22    |    22
  1  |    8     |     3    |     3     |     3    |    99    |    99
  2  |    2     |     8    |     8     |     8    |     3    |     3
  3  |    4     |     2    |     2     |     2    |     8    |     8
  4  |    6     |     4    |     4     |     4    |     2    |     2
  5  |    5     |     6    |     6     |     6    |     4    |     4
  6  |          |     5    |           |          |     6    |
The output sequence is 22, 99, 3, 8, 2, 4, which is the input sequence, delayed by two time steps.
Another way to think about cascade composition is as follows. Let the input to m1 at time t be
called i1[t] and the output of m1 at time t be called o1[t]. Then, we can describe the workings of
the delay machine in terms of an equation:
o1[t] = i1[t − 1] for all values of t > 0;
o1[0] = init1
that is, that the output value at every time t is equal to the input value at the previous time step.
You can see that in the table above. The same relation holds for the input and output of m2:
o2[t] = i2[t − 1] for all values of t > 0.
o2[0] = init2
Now, since we have connected the output of m1 to the input of m2, we also have that i2[t] = o1[t]
for all values of t. This lets us make the following derivation:
o2[t] = i2[t − 1]
= o1[t − 1]
= i1[t − 2]
This makes it clear that we have built a “delay by two” machine, by cascading two single delay
machines.
As with all of our systems of combination, we will be able to form the cascade composition not
only of two primitive machines, but of any two machines that we can make, through any set of
compositions of primitive machines.
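The code for the Cascade combinator itself does not appear in this excerpt. Here is a plausible sketch (hypothetical, modeled on the Parallel class of section 4.2.2: the composite state is the pair of constituent states, and m1's output is fed to m2). With two Delay machines it reproduces the delay-by-two behavior derived above:

```python
# Minimal SM base class and Delay machine, as defined earlier in the chapter.
class SM:
    def start(self):
        self.state = self.startState
    def step(self, inp):
        (s, o) = self.getNextValues(self.state, inp)
        self.state = s
        return o
    def transduce(self, inputs):
        self.start()
        return [self.step(inp) for inp in inputs]

class Delay(SM):
    def __init__(self, v0):
        self.startState = v0
    def getNextValues(self, state, inp):
        return (inp, state)

class Cascade(SM):
    # Hypothetical sketch of the cascade combinator: the output of m1
    # becomes the input of m2; the composite output is m2's output.
    def __init__(self, sm1, sm2):
        self.m1 = sm1
        self.m2 = sm2
        self.startState = (sm1.startState, sm2.startState)
    def getNextValues(self, state, inp):
        (s1, s2) = state
        (newS1, o1) = self.m1.getNextValues(s1, inp)
        (newS2, o2) = self.m2.getNextValues(s2, o1)
        return ((newS1, newS2), o2)

print(Cascade(Delay(99), Delay(22)).transduce([3, 8, 2, 4, 6, 5]))
# [22, 99, 3, 8, 2, 4]
```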
Here is the Python code for an Increment machine. It is a pure function whose output at time t
is just the input at time t plus the constant incr. The safeAdd function is the same as addition, if
the inputs are numbers. We will see, later, why it is important.
class Increment(SM):
def __init__(self, incr):
self.incr = incr
def getNextState(self, state, inp):
return safeAdd(inp, self.incr)
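safeAdd is not defined in this excerpt. A plausible sketch (hypothetical; the real sm module may handle more cases) is ordinary addition on numbers but propagates the special ’undefined’ token used later in feedback composition:

```python
def safeAdd(a, b):
    # Hypothetical sketch: if either operand is the token 'undefined',
    # the sum is 'undefined'; otherwise, ordinary addition.
    if a == 'undefined' or b == 'undefined':
        return 'undefined'
    return a + b
```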
Exercise 4.2. Derive what happens when you cascade two delay-by-two machines.
Exercise 4.3.
What is the difference between these two machines?
>>> foo1 = sm.Cascade(sm.Delay(100), Increment(1))
>>> foo2 = sm.Cascade(Increment(1), sm.Delay(100))
Demonstrate by drawing a table of their inputs, states, and outputs, over
time.
4.2.2 Parallel composition
In parallel composition, we take two machines and run them “side by side”. They both take the
same input, and the output of the composite machine is the pair of outputs of the individual
machines. The result is a new composite machine, whose input vocabulary is the same as the
input vocabulary of the component machines (which is the same for both machines) and whose
output vocabulary is pairs of elements, the first from the output vocabulary of the first machine
and the second from the output vocabulary of the second machine. Figure 4.5 shows two types
of parallel composition; in this section we are talking about the first type.
Figure 4.5 Parallel and Parallel2 compositions of state machines.
In Python, we can define a new class of state machines, called Parallel, which is a subclass of
SM. To make an instance of Parallel, we pass two SMs of any type into the initializer. The state of
the parallel machine is a pair consisting of the states of the constituent machines. So, the starting
state is the pair of the starting states of the constituents.
class Parallel (SM):
def __init__(self, sm1, sm2):
self.m1 = sm1
self.m2 = sm2
self.startState = (sm1.startState, sm2.startState)
To get a new state of the composite machine, we just have to get new states for each of the constituents, and return the pair of them; similarly for the outputs.
def getNextValues(self, state, inp):
(s1, s2) = state
(newS1, o1) = self.m1.getNextValues(s1, inp)
(newS2, o2) = self.m2.getNextValues(s2, inp)
return ((newS1, newS2), (o1, o2))
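A brief usage sketch of Parallel (with a minimal SM base class and the Delay machine from earlier in the chapter): both constituent machines see the same input, and the composite output is the pair of their outputs.

```python
# Minimal SM base class and Delay machine, as defined earlier.
class SM:
    def start(self):
        self.state = self.startState
    def step(self, inp):
        (s, o) = self.getNextValues(self.state, inp)
        self.state = s
        return o
    def transduce(self, inputs):
        self.start()
        return [self.step(inp) for inp in inputs]

class Delay(SM):
    def __init__(self, v0):
        self.startState = v0
    def getNextValues(self, state, inp):
        return (inp, state)

class Parallel(SM):
    def __init__(self, sm1, sm2):
        self.m1 = sm1
        self.m2 = sm2
        self.startState = (sm1.startState, sm2.startState)
    def getNextValues(self, state, inp):
        (s1, s2) = state
        (newS1, o1) = self.m1.getNextValues(s1, inp)
        (newS2, o2) = self.m2.getNextValues(s2, inp)
        return ((newS1, newS2), (o1, o2))

print(Parallel(Delay(0), Delay(10)).transduce([1, 2, 3]))
# [(0, 10), (1, 1), (2, 2)]
```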
Parallel2
Sometimes we will want a variant on parallel combination, in which rather than having the input
be a single item which is fed to both machines, the input is a pair of items, the first of which is
fed to the first machine and the second to the second machine. This composition is shown in the
second part of figure 4.5.
Here is a Python class that implements this two-input parallel composition. It can inherit the __init__ method from Parallel, so we use Parallel as the superclass, and we only have to define the getNextValues method.
class Parallel2 (Parallel):
def getNextValues(self, state, inp):
(s1, s2) = state
(i1, i2) = splitValue(inp)
(newS1, o1) = self.m1.getNextValues(s1, i1)
(newS2, o2) = self.m2.getNextValues(s2, i2)
return ((newS1, newS2), (o1, o2))
Later, when dealing with feedback systems (section 4.2.3), we will need to be able to deal
with ’undefined’ as an input. If the Parallel2 machine gets an input of ’undefined’, then
we want to pass ’undefined’ into the constituent machines. We make our code more beautiful
by defining the helper function below, which is guaranteed to return a pair, if its argument is
either a pair or ’undefined’. 33
def splitValue(v):
if v == ’undefined’:
return (’undefined’, ’undefined’)
else:
return v
ParallelAdd
The ParallelAdd state machine combination is just like Parallel, except that it has a single
output whose value is the sum of the outputs of the constituent machines. It is straightforward to
define:
33 We are trying to make the code examples we show here as simple and clear as possible; if we were writing code for
actual deployment, we would check and generate error messages for all sorts of potential problems (in this case, for
instance, if v is neither None nor a two-element list or tuple.)
class ParallelAdd (Parallel):
def getNextValues(self, state, inp):
(s1, s2) = state
(newS1, o1) = self.m1.getNextValues(s1, inp)
(newS2, o2) = self.m2.getNextValues(s2, inp)
return ((newS1, newS2), o1 + o2)
4.2.3 Feedback composition
Figure 4.6 Two forms of feedback composition.
Another important means of combination that we will use frequently is the feedback combinator,
in which the output of a machine is fed back to be the input of the same machine at the next step,
as shown in figure 4.6. The first value that is fed back is the output associated with the initial state
of the machine on which we are operating. It is crucial that the input and output vocabularies of
the machine are the same (because the output at step t will be the input at step t + 1). Because
we have fed the output back to the input, this machine does not consume any inputs; but we will
treat the feedback value as an output of this machine.
Here is an example of using feedback to make a machine that counts. We can start with a simple
machine, an incrementer, that takes a number as input and returns that same number plus 1 as
the output. By itself, it has no memory. Here is its formal description:
S = numbers
I = numbers
O = numbers
n(s, i) = i + 1
o(s, i) = n(s, i)
s0 = 0
What would happen if we performed the feedback operation on this machine? We can try to
understand this in terms of the input/output equations.
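A simplified sketch of the feedback combinator (hypothetical; the sm module's actual Feedback handles ’undefined’ values more carefully) applied to this incrementer yields a counter:

```python
# Minimal SM base class, as in section 4.1.2.2.
class SM:
    def start(self):
        self.state = self.startState
    def step(self, inp):
        (s, o) = self.getNextValues(self.state, inp)
        self.state = s
        return o
    def transduce(self, inputs):
        self.start()
        return [self.step(inp) for inp in inputs]
    def run(self, n=10):
        return self.transduce([None] * n)

class Incr(SM):
    # The incrementer from the formal description: output = input + 1.
    startState = 0
    def getNextValues(self, state, inp):
        return (inp + 1, inp + 1)

class SimpleFeedback(SM):
    # Hypothetical simplified feedback: the machine's previous output is
    # fed back as its next input; the external input (None) is ignored.
    # firstInput stands in for the output associated with the initial state.
    def __init__(self, m, firstInput):
        self.m = m
        self.startState = (m.startState, firstInput)
    def getNextValues(self, state, inp):
        (s, fedBack) = state
        (newS, o) = self.m.getNextValues(s, fedBack)
        return ((newS, o), o)

print(SimpleFeedback(Incr(), 0).run(5))  # [1, 2, 3, 4, 5]
```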