We can connect the matrix components with the functions of the imaging elements. For the ray transfer matrix

( x_out ; θ_out ) = [ A B ; C D ] ( x_in ; θ_in ),

A = (∂x_out/∂x_in) : lateral magnification;
B = (∂x_out/∂θ_in) : mapping angles (momentum) to position (function of a prism);
C = (∂θ_out/∂x_in) : mapping position to angles (momentum) (also function of a prism);
D = (∂θ_out/∂θ_in) : angular magnification.

For cascaded elements, we simply multiply the ray matrices. (Please notice the ordering: the ray meets the leftmost element on the optical axis first, so that element's matrix sits rightmost in the product. For elements O1, O2 in order along the axis,

( x_out ; θ_out ) = O2 O1 ( x_in ; θ_in ). )

Significance of the matrix elements: (Pedrotti Figure 18.9)
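The ordering convention can be checked numerically. The sketch below (not from the notes; the element values d and f are arbitrary examples) propagates a ray over a distance d and then through a thin lens of focal length f, so the translation matrix must sit rightmost in the product:

```java
public class RayMatrix {
    // Multiply 2x2 ABCD matrices: mul(m2, m1) applies m1 first, then m2
    static double[][] mul(double[][] m2, double[][] m1) {
        return new double[][] {
            { m2[0][0]*m1[0][0] + m2[0][1]*m1[1][0], m2[0][0]*m1[0][1] + m2[0][1]*m1[1][1] },
            { m2[1][0]*m1[0][0] + m2[1][1]*m1[1][0], m2[1][0]*m1[0][1] + m2[1][1]*m1[1][1] }
        };
    }
    public static void main(String[] args) {
        double d = 0.1, f = 0.05;            // 10 cm of free propagation, then a 5 cm lens
        double[][] T = {{1, d}, {0, 1}};     // translation
        double[][] L = {{1, 0}, {-1/f, 1}};  // thin lens
        double[][] M = mul(L, T);            // ray meets T first, so T goes rightmost
        // M ≈ [1, d; -1/f, 1 - d/f]
        System.out.println(M[0][0] + " " + M[0][1]);
        System.out.println(M[1][0] + " " + M[1][1]);
    }
}
```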
Lecture Notes on Geometrical Optics (02/18/14)
2.71/2.710 Introduction to Optics –Nick Fang
© Pearson Prentice Hall. All rights reserved. This content is excluded from our Creative
Commons license. For more information, see http://ocw.mit.edu/fairuse.
(a) If the input surface is at the front focal plane, the outgoing ray angles depend
only on the incident height.
(b) Similarly, if the output surface is at the back focal plane, the outgoing ray heights
depend only on the incoming angles.
(c) If the input and output plane are conjugate, then all incoming rays from constant
height y0 will converge at a constant height regardless of their angle.
(d) When the system is “afocal”, the refracting angles of the outgoing beams are
independent of the input positions.
Example 1: refraction matrix from a spherical interface (only changes θ but not x)
Right at the interface,

x_in = x_out

Applying Snell's law in the small-angle (paraxial) approximation:

n1 (θ_in + x_in/R) ≈ n2 (θ_out + x_in/R)

θ_out ≈ (n1/n2) θ_in + [ (n1/n2) − 1 ] x_in/R

So we can write the matrix:

[ 1 , 0 ; ( (n1/n2) − 1 )/R , n1/n2 ]
(Figure: refraction at a spherical interface of radius R between media of index n1 and n2.)
Example 2: matrix of a ray propagating in a medium (changes x but not θ)
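The derivation for this example is elided in the extracted notes; the standard result, filled in here for completeness: over a propagation distance d the height changes by d·θ while the angle stays fixed, so

```latex
x_{out} = x_{in} + d\,\theta_{in}, \qquad \theta_{out} = \theta_{in}
\quad\Longrightarrow\quad
\begin{pmatrix} x_{out} \\ \theta_{out} \end{pmatrix}
= \begin{bmatrix} 1 & d \\ 0 & 1 \end{bmatrix}
\begin{pmatrix} x_{in} \\ \theta_{in} \end{pmatrix}
```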
Example 3: refraction matrix through a thin lens (combined refraction)
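The thin-lens matrix follows by multiplying two spherical-interface matrices (Example 1 applied twice) with zero separation. A sketch of the standard steps, assuming a lens of index n in air and signed radii of curvature:

```latex
\underbrace{\begin{bmatrix} 1 & 0 \\ \tfrac{n-1}{R_2} & n \end{bmatrix}}_{\text{glass}\to\text{air}}
\underbrace{\begin{bmatrix} 1 & 0 \\ \tfrac{1/n-1}{R_1} & \tfrac{1}{n} \end{bmatrix}}_{\text{air}\to\text{glass}}
= \begin{bmatrix} 1 & 0 \\ -(n-1)\!\left(\tfrac{1}{R_1}-\tfrac{1}{R_2}\right) & 1 \end{bmatrix}
\equiv \begin{bmatrix} 1 & 0 \\ -1/f & 1 \end{bmatrix}
```

The lower-left element reproduces the lensmaker's equation, 1/f = (n − 1)(1/R1 − 1/R2).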
Example 4: Imaging matrix through a thick lens (combined refraction and
translation)
From left to right:

- Translation O1: [ 1 , s_o1 ; 0 , 1 ]
- Refraction O2: [ 1 , 0 ; ( (n/n') − 1 )/R1 , n/n' ]
- Translation O3: [ 1 , d ; 0 , 1 ]
- Refraction O4: [ 1 , 0 ; ( (n'/n) − 1 )/(−R2) , n'/n ]
(Figure: thick lens geometry — surface radii R1 and R2 separated by thickness d; object distance s_o1, image distance s_i2; lens index n' in a medium of index n.)
- Translation O5: [ 1 , s_i2 ; 0 , 1 ]
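The lens-only product O4·O3·O2 can be multiplied out numerically. A minimal sketch (the index and radius values are assumed examples, not from the notes); since the same medium n sits on both sides of the lens, the determinant of the product should come out to 1, which serves as a sanity check:

```java
public class ThickLens {
    static double[][] mul(double[][] a, double[][] b) {
        return new double[][] {
            { a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1] },
            { a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1] }
        };
    }
    public static void main(String[] args) {
        double n = 1.0, np = 1.5;            // surrounding index n, glass index n' (example values)
        double R1 = 0.1, R2 = 0.1, d = 0.01; // radii magnitudes and thickness in meters (examples)
        double[][] O2 = {{1, 0}, {(n/np - 1)/R1, n/np}};     // refraction at first surface
        double[][] O3 = {{1, d}, {0, 1}};                    // translation inside the glass
        double[][] O4 = {{1, 0}, {(np/n - 1)/(-R2), np/n}};  // refraction at second surface
        double[][] M = mul(O4, mul(O3, O2));                 // rightmost matrix acts first
        double det = M[0][0]*M[1][1] - M[0][1]*M[1][0];      // should be 1: same index on both sides
        System.out.println("C = " + M[1][0] + ", det = " + det);
    }
}
```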
D. Location of Principal Planes for an Optical System
A ray matrix of the optical system (composite lenses and other elements) gives us a complete description of the rays passing through the overall optical train. In this section, we show how to extract the focusing properties of the composite lens, such as the locations of the principal planes, from the matrix components.
In order to facilitate our analysis, we choose the input plane to be the front surface
of the lens arrays, and the output plane to be the back surface of the lenses.
(Adapted from Pedrotti Figure 18‐12)
Let's start with the process of focusing at the back focus first. In this case, an incoming parallel ray (x_0; 0) is refracted at the 2nd principal plane (PP) so that it passes through the back focal point (BF). At the output plane, the ray vector of the refracted ray reads (x_f; −θ_f):

( x_f ; −θ_f ) = [ A B ; C D ] ( x_0 ; 0 )

This gives x_f = A x_0 and −θ_f = C x_0.
(Figure: back focusing geometry — input plane, output plane, and 2nd PP; parallel input ray at height x_0, output height x_f; distances BFL and EFL.)
Using the small-angle approximation, we can connect the ratio of the beam height x_0 and the effective focal length (EFL) by the steering angle θ_f:

θ_f = x_0 / EFL

Thus EFL = −1/C.

Also, from similar triangles, we can find the BFL:

x_f / x_0 = BFL / EFL, so BFL = −A/C.

Thus the 2nd PP is located at a distance from the output plane given by:

BFL − EFL = −(A − 1)/C.
Likewise, we can find the FFL and the first principal plane from the matrix components:

( x'_0 ; 0 ) = [ A B ; C D ] ( −x'_f ; −θ'_f )
(Figure: front focusing geometry — input plane, output plane, and 1st PP; heights x'_0 and x'_f; distances FFL and EFL.)
You could consider this as an inverse problem of the previous example, or solve the relationships:

x'_0 = −A x'_f − B θ'_f
0 = −C x'_f − D θ'_f

with θ'_f = x'_0 / EFL and θ'_f = x'_f / FFL.

So how is the ray matrix experimentally determined by ray tracing?
Generally, for a given (2D) optical system with unknown details, one way to
determine the transfer matrix is to take measurement of two arbitrary input and
output rays. To elaborate that idea, we can treat a pair of the input ray vectors as a
2x2 matrix:
( x_out¹ x_out² ; θ_out¹ θ_out² ) = [ A B ; C D ] ( x_in¹ x_in² ; θ_in¹ θ_in² )

Therefore

[ A B ; C D ] = ( x_out¹ x_out² ; θ_out¹ θ_out² ) ( x_in¹ x_in² ; θ_in¹ θ_in² )⁻¹

= 1/( x_in¹ θ_in² − x_in² θ_in¹ ) × ( x_out¹ θ_in² − x_out² θ_in¹ , x_out² x_in¹ − x_out¹ x_in² ; θ_out¹ θ_in² − θ_out² θ_in¹ , θ_out² x_in¹ − θ_out¹ x_in² )
As a special case you may select the two rays to be marginal and chief rays as
defined in the following section.
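The 2×2 inversion above can be sketched in code (a hypothetical numeric check, not part of the notes): probing an unknown system — here, secretly a thin lens of focal length f — with two independent rays recovers its ABCD matrix.

```java
public class TransferMatrix {
    // Recover [A B; C D] from two measured ray pairs; each ray is {x, theta}
    static double[][] recover(double[] in1, double[] out1, double[] in2, double[] out2) {
        double det = in1[0]*in2[1] - in2[0]*in1[1]; // x_in1*th_in2 - x_in2*th_in1
        return new double[][] {
            { (out1[0]*in2[1] - out2[0]*in1[1]) / det, (out2[0]*in1[0] - out1[0]*in2[0]) / det },
            { (out1[1]*in2[1] - out2[1]*in1[1]) / det, (out2[1]*in1[0] - out1[1]*in2[0]) / det }
        };
    }
    public static void main(String[] args) {
        double f = 0.05;                                 // unknown system: a 5 cm thin lens
        double[] in1 = {0.01, 0.0},  out1 = {0.01, -0.01/f}; // parallel input ray
        double[] in2 = {0.0, 0.02},  out2 = {0.0, 0.02};     // ray through the lens center
        double[][] M = recover(in1, out1, in2, out2);
        // M ≈ [1, 0; -1/f, 1], the thin-lens matrix
        System.out.println(M[0][0] + " " + M[0][1]);
        System.out.println(M[1][0] + " " + M[1][1]);
    }
}
```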
Practice Example: Rays Going Through a 2F/4F Lens System
Please determine the ray transfer matrix of the following lens elements, with
their input and output planes located at the front and back focal point of the
corresponding lens.
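As a partial check for the single-lens piece of this exercise (a sketch, not the full worked answer), multiplying translation–thin lens–translation with the planes at the focal points gives:

```latex
\begin{bmatrix} 1 & f \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ -1/f & 1 \end{bmatrix}
\begin{bmatrix} 1 & f \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix} 0 & f \\ -1/f & 0 \end{bmatrix}
```

Here A = D = 0, consistent with cases (a) and (b) of Pedrotti Figure 18.9: the output height depends only on the input angle, and the output angle only on the input height.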
(Figure: a single lens with input and output planes each a distance f from the lens.)
E. Aperture Stops, Pupils and Windows
o The Aperture Stop and Numerical Aperture
o Numerical Aperture (NA):
- limits the optical flux that is admitted through the system;
- also defines the resolution (or resolving power) of the optical system
o The concept of marginal rays and chief rays
- Marginal ray: the ray that passes through the edge of the aperture.
- Chief ray (also called principal ray): the ray from an object point that
passes through the axial point of the aperture stop (it also appears to
emerge from the axis of the exit pupil).
Together, the C.R. and M.R. define the angular acceptance of spherical ray
bundles originating from an off-axis object.
o The entrance and exit pupils
(Figure: multi-element optical system with an aperture stop; the entrance pupil is its image through the preceding elements, the exit pupil its image through the succeeding elements.)
Lecture Notes on Geometrical Optics (02/18/14)
2.71/2.710 Introduction to Optics –Nick Fang
o The field stop and corresponding windows
o Field stop:
- Limits the angular acceptance of Chief Rays
- Defines the Field of View
- Proper FS should be at intermediate image plane
o Entrance & Exit Windows
(Figure: field stop with its entrance and exit windows — its images through the preceding and succeeding elements.)
o Effect of aperture and field stops
(Figure: phase-space pictures, θ (momentum) vs. x (location), at input and output: the aperture stop bounds the NA via the entrance and exit pupils; the field stop bounds the FoV via the entrance and exit windows.)
Practice Example: Single lens camera:
© Pearson Prentice Hall. All rights reserved. This content is excluded from our Creative
Commons license. For more information, see http://ocw.mit.edu/fairuse.
- Please determine the position and size of the image.
- Please determine the entrance and exit pupils.
- Please sketch the chief ray and marginal rays from the top of the object to the image.
MIT OpenCourseWare
http://ocw.mit.edu
2.71 / 2.710 Optics
Spring 2014
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Day 3
Hashing, Collections,
and Comparators
Wed. January 25th 2006
Scott Ostler
Hashing
Yesterday we overrode .equals()
Today we override .hashCode()
Goal: understand why we need to, and how
to do it
What is a Hash?
An integer that “stands in” for an object
Quick way to check for inequality, construct
groupings
Equal things (should) have equal hashes
What is .hashCode()
Well known method name that returns int
Is defined in java.lang.Object to return a
value mostly unique to that instance
All classes either inherit it, or override it
hashCode Object Contract
An object’s hashcode cannot change until it
is no longer equal to what it was
Two equal objects must have an equal
hashCode
It is good if two unequal objects have distinct
hashes
Hashcode Examples
String scott = "Scotty";
String scott2 = "Scotty";
String corey = "Corey";
System.out.println(scott.hashCode());
System.out.println(scott2.hashCode());
System.out.println(corey.hashCode());
=> -1823897190, -1823897190, 65295514
Integer int1 = 123456789;
Integer int2 = 123456789;
System.out.println(int1.hashCode());
System.out.println(int2.hashCode());
=> 123456789, 123456789
A Name Class with equals()
public class Name {
    public String first;
    public String last;
    public Name(String first, String last) {
        this.first = first;
        this.last = last;
    }
    public String toString() {
        return first + " " + last;
    }
    public boolean equals(Object o) {
        return (o instanceof Name &&
                ((Name) o).first.equals(this.first) &&
                ((Name) o).last.equals(this.last));
    }
}
Do our Names work?
Name kyle = new Name("Kyle", "MacLaughlin");
Name jack = new Name("Jack", "Nance");
Name jack2 = new Name("Jack", "Nance");
System.out.println(kyle.equals(jack));
System.out.println(jack.equals(jack2));
System.out.println(kyle.hashCode());
System.out.println(jack.hashCode());
System.out.println(jack2.hashCode());
⇒ false, true, 6718604, 7122755, 14718739
Objects are equal, hashCodes aren’t
Who cares about hashCode?
Name code seems to work
Is this really a problem?
If we don’t use hashCode(), why bother
writing it?
ANSWER: JAVA CARES!
We have violated the Object contract
We have embarked upon a path filled with
Bad, Strange Things
Bad, Strange Thing #1
Set<String> strings = new HashSet<String>();
Set<Name> names = new HashSet<Name>();
strings.add("jack");
names.add(new Name("Jack", "Nance"));
System.out.println(strings.contains("jack"));
System.out.println(names.contains(
    new Name("Jack", "Nance")));
=> true, false
Solution? make .hashCode()
Remember our requirements:
hashCode() must obey equality
hashCode() must be consistent
hashCode() must generate int
hashCode() should recognize inequality
Possible Implementation
public class Name {
…
public int hashCode() {
return first.hashCode()
+ last.hashCode();
}
}
Does this work?
Good, Normal Thing #1
Set<Name> names = new HashSet<Name>();
names.add(jack);
System.out.println(names.contains(
    new Name("Jack", "Nance")));
⇒ true
Could it be better?
A Better Implementation
public class Name {
…
public int hashCode() {
return first.hashCode() * 37
+ last.hashCode();
}
}
Why is it better? (remember contract)
hashCode Object Contract
An object’s hashcode cannot change until it
is no longer equal to what it was
Two equal objects must have an equal
hashCode
It is good if two unequal objects have distinct
hashes
Ex: Jack Nance will be different from Nance Jack
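The claim above can be checked directly. A small sketch (not from the slides): the sum-based hash is symmetric in first/last name, so swapped names always collide, while the multiply-by-37 version distinguishes them.

```java
public class HashDemo {
    // Sum-based hash: symmetric, so "Jack Nance" and "Nance Jack" collide
    static int sumHash(String first, String last) {
        return first.hashCode() + last.hashCode();
    }
    // Multiplier-based hash: order matters, so the two names get distinct hashes
    static int mulHash(String first, String last) {
        return first.hashCode() * 37 + last.hashCode();
    }
    public static void main(String[] args) {
        System.out.println(sumHash("Jack", "Nance") == sumHash("Nance", "Jack")); // true: collision
        System.out.println(mulHash("Jack", "Nance") == mulHash("Nance", "Jack")); // false
    }
}
```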
Before We Switch Topics
Any questions about hashCode, please ask!
It will be an important point later today
It will cause bizarre problems if you don’t
understand it
What Collections Do
“Framework” of Interfaces and Classes to
handle:
Collecting objects
Storing objects
Sorting objects
Retrieving objects
Provides common syntax across variety of
different Collection implementations
How to use Collections
add import java.util.*; to the top of every java
file
package lab2;
import java.util.*;
public class CollectionUser {
List<String> list = new ArrayList<String>();
… //rest of class
}
Basic Collection<Foo> Syntax
boolean add(Foo o);
boolean contains(Object o);
boolean remove(Object o);
int size();
Example Usage
List<Name> iapjava = new ArrayList<Name>();
iapjava.add(new Name("Laura", "Dern"));
iapjava.add(new Name("Toby", "Keeler"));
System.out.println(iapjava.size()); => 2
iapjava.remove(new Name("Toby", "Keeler"));
System.out.println(iapjava.size()); => 1
List<Name> iapruby = new ArrayList<Name>();
iapruby.add(new Name("Scott", "Ostler"));
iapjava.addAll(iapruby);
System.out.println(iapjava.size()); => 2
Generic Collections
We can specify the type of object that a
collection will hold
Ex: List<String> strings
We are reasonably sure that strings contains
only String objects
Is optional, but very useful
Why Use Generics?
List untyped = new ArrayList();
List<String> typed = new ArrayList<String>();
Object obj = untyped.get(0);
String sillyString = (String) obj;
String smartString = typed.get(0);
Retrieving objects
Given Collection<Foo> coll
Iterator:
Iterator<Foo> it = coll.iterator();
while (it.hasNext()) {
    Foo obj = it.next();
    // do something with obj
}
For each:
for (Foo obj : coll) {
// do something with obj
}
Object Removing Caveat
Can’t remove objects from a Collection while
iterating over it
for (Foo obj : coll)
coll.remove(obj) // ConcurrentModificationException
}
Only the Iterator can remove an object it’s iterating over
Iterator<Foo> it = coll.iterator();
while (it.hasNext) {
Foo obj = it.next();
it.remove(Obj); // NOT coll.remove(Obj);
}
Note that iter.remove is optional, and not all Iterator objects
will support it
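The safe-removal pattern above, as a complete runnable sketch (the class and method names here are illustrative):

```java
import java.util.*;

public class RemoveDemo {
    // Remove every string longer than maxLen, safely, via the iterator
    static List<String> keepShort(List<String> coll, int maxLen) {
        Iterator<String> it = coll.iterator();
        while (it.hasNext()) {
            if (it.next().length() > maxLen) it.remove(); // safe: the iterator does the removal
        }
        return coll;
    }
    public static void main(String[] args) {
        List<String> coll = new ArrayList<String>(Arrays.asList("a", "bb", "ccc"));
        System.out.println(keepShort(coll, 1)); // [a]
    }
}
```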
General Collection Types
List
ArrayList
Set
HashSet
TreeSet
Map
HashMap
List Overview
Ordered list of objects, similar to Array
Unlike Array, no set size
List order generally equals insert order
List<String> strings = new ArrayList<String>();
strings.add("one");
strings.add("two");
strings.add("three");
// strings = [ "one", "two", "three" ]
Other Ways
Insert at an index:
List<String> strings = new ArrayList<String>();
strings.add("one");
strings.add("three");
strings.add(1, "two");
// strings = [ "one", "two", "three" ]
Retrieve objects with an index:
System.out.println(strings.get(0));         // => "one"
System.out.println(strings.indexOf("one")); // => 0
Set Overview
No set size, no set order
No duplicate objects allowed!
Set<Name> names = new HashSet<Name>();
names.add(new Name("Jack", "Nance"));
names.add(new Name("Jack", "Nance"));
System.out.println(names.size()); => 1
Set Contract
A set element cannot be changed in a way
that affects its equality
This is a danger of object mutability
If you don’t obey the contract, prepare for
Bad, Strange Things
Bad, Strange Thing #2
Set<Name> names = new HashSet<Name>();
Name jack = new Name("Jack", "Nance");
names.add(jack);
System.out.println(names.size()); => 1
System.out.println(names.contains(jack)); => true
jack.last = "Vance";
System.out.println(names.contains(jack)); => false
System.out.println(names.size()); => 1
Solutions to the Problem?
None.
So don’t do it.
If at all possible, use immutable set elements
Otherwise, be careful
Map Overview
Mapping between a set of “Key-Value Pairs”
That is, for every Key object, there is a Value
object
Essentially a “lookup service”
Keys must be unique, but values don’t have
to be
Note: Map is not a Collection
Map doesn’t support:
boolean add(Foo obj);
boolean contains(Object obj);
Rather, it supports:
Bar put(Foo key, Bar value); // returns the previous value for the key, or null
boolean containsKey(Foo key);
boolean containsValue(Bar value);
Sample Map Usage
Map<String, String> dns = new HashMap<String, String>();
dns.put("scotty.mit.edu", "18.227.0.87");
System.out.println(dns.get("scotty.mit.edu"));
System.out.println(dns.containsKey("scotty.mit.edu"));
System.out.println(dns.containsValue("18.227.0.87"));
dns.remove("scotty.mit.edu");
System.out.println(dns.containsValue("18.227.0.87"));
// => "18.227.0.87", true, true, false
Other Useful Methods
keySet() - returns a Set of all the keys
values() - returns a Collection of all the values
entrySet() - returns a Set of Key,Value Pairs
Each pair is a Map.Entry object
Map.Entry supports getKey, getValue, setValue
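A short sketch of entrySet() iteration (the second host entry below is a hypothetical address added for illustration; a TreeMap is used only to make the output order predictable):

```java
import java.util.*;

public class EntryDemo {
    // Build a "key -> value" line per entry; entrySet() avoids a second lookup per key
    static List<String> render(Map<String, String> map) {
        List<String> lines = new ArrayList<String>();
        for (Map.Entry<String, String> e : map.entrySet()) {
            lines.add(e.getKey() + " -> " + e.getValue());
        }
        return lines;
    }
    public static void main(String[] args) {
        Map<String, String> dns = new TreeMap<String, String>();
        dns.put("scotty.mit.edu", "18.227.0.87"); // address from the slide
        dns.put("web.mit.edu", "18.0.0.1");       // hypothetical address, for illustration
        System.out.println(render(dns));
    }
}
```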
Dangers of Key Mutability
A key must always be equal to what it was
This is a restatement of the Set discussion
If a key changes, it and its value will be
“lost”
Bad, Strange Thing #3
Name isabella = new Name("Isabella", "Rosellini");
Map<Name, String> directory = new HashMap<Name, String>();
directory.put(isabella, "123-456-7890");
System.out.println(directory.get(isabella));
isabella.first = "Dennis";
System.out.println(directory.get(isabella));
directory.put(new Name("Isabella", "Rosellini"), "555-555-1234");
isabella.first = "Isabella";
System.out.println(directory.get(isabella));
What happens?
Two Answers
Right Answer:
// => 123-456-7890, null, 555-555-1234
Righter Answer:
Doesn’t matter because we shouldn’t be doing
it
Unspecified behavior
How to Fix Mutable Keys?
We want to be able to use any object to
stand in for another
But mutable objects are dangerous
Copy the Key
Name dennis = new Name("Dennis", "Hopper");
Name copy = new Name(dennis.first, dennis.last);
map.put(copy, "555-555-1234");
Now changes to dennis don't mess up map
But the keys themselves can still be changed:
for (Name name : map.keySet()) {
    name.first = "u r wrecked"; // uh oh
}
Make Immutable Keys
public class Name {
public final String first;
public final String last;
public Name(String first, String last) {
this.first = first;
this.last = last;
}
public boolean equals(Object o) {
return (o instanceof Name &&
((Name) o).first.equals(this.first) &&
((Name) o).last.equals(this.last));
}}
Immutable Proxy for Keys
Map<String, String> dir = new HashMap<String, String>();
Name naomi = new Name("Naomi", "Watts");
String key = naomi.first + "," + naomi.last;
dir.put(key, "888-444-1212");
Strings are immutable, so our Maps will be safe
“Freeze” Keys
public class Name {
    private String first;
    private String last;
    private boolean frozen = false;
    …
    public void setFirst(String s) {
        if (!frozen) first = s;
    }
    … // do same with setLast
    public void freeze() {
        frozen = true;
    }
}
Summary: Mutable Keys
Each approach has tradeoffs
But where appropriate, choose the simplest,
strongest solution
If a key cannot ever be changed, there will
never be problems
"Put and Pray" only as a last resort
Collection Wrap-up
Common problems
Sharing objects between Collections
Trying to remove an Object during iteration
Mutable Keys, Sets
Any questions?
Comparing and Sorting
Used to decide, between two objects, if one
is bigger or they are equal
(a.compareTo(b)) should result in:
< 0 if a < b
= 0 if a = b
> 0 if a > b
Comparison Example
Integer one = 1;
System.out.println(one.compareTo(3));
System.out.println(one.compareTo(-50));
String frank = "Frank";
System.out.println(frank.compareTo("Booth"));
System.out.println(frank.compareTo("Hopper"));
// => -1 , 1, 4, -2
Sorting a List Alphabetically
List<String> names = new ArrayList<String>();
names.add("Sailor");
names.add("Lula");
names.add("Bobby");
names.add("Santos");
names.add("Dell");
Collections.sort(names);
// names => [ "Bobby", "Dell", "Lula", "Sailor", "Santos" ]
Comparable Interface
We can sort Strings because they implement
Comparable
That is, they have a “Natural Ordering”.
To make Foo class Comparable, we have to
implement:
int compareTo(Foo obj);
A Sortable Name
public class Name implements Comparable<Name> {
…
public int compareTo(Name o) {
    int compare = this.last.compareTo(o.last);
    if (compare != 0)
        return compare;
    else return this.first.compareTo(o.first);
}
}
Sorting Names in Action
List<Name> names = new ArrayList<Name>();
names.add(new Name("Nicolas", "Cage"));
names.add(new Name("Laura", "Dern"));
names.add(new Name("Harry", "Stanton"));
names.add(new Name("Diane", "Ladd"));
names.add(new Name("William", "Morgan"));
names.add(new Name("Crispin", "Glover"));
names.add(new Name("Johnny", "Cage"));
names.add(new Name("Metal", "Cage"));
System.out.println(names);
Collections.sort(names);
System.out.println(names);
// => [Johnny Cage, Metal Cage, Nicolas Cage, Laura Dern, Crispin Glover, Diane Ladd, William Morgan, Harry Stanton]
Comparator Objects
To create multiple sortings for a given Type,
we can define Comparator classes
A Comparator takes in two objects, and
determines which is bigger
For type Foo, a Comparator<Foo> has:
int compare(Foo o1, Foo o2);
A First-Name-First Comparator
public class FirstNameFirst implements
Comparator<Name> {
public int compare(Name n1, Name n2) {
int ret = n1.first.compareTo(n2.first);
if (ret != 0)
return ret;
else return n1.last.compareTo(n2.last);
}
}
This goes in a separate file, FirstNameFirst.java
Does it Work?
List<Name> names = new ArrayList<Name>();
..
Comparator<Name> first = new FirstNameFirst();
Collections.sort(names, first);
System.out.println(names);
// => [Crispin Glover, Diane Ladd, Harry Stanton, Johnny Cage, Laura Dern, Metal Cage, Nicolas Cage, William Morgan]
It works!
Comparison Contract
Once again, there are rules that we must
follow
Specifically, be careful when
(compare(e1, e2)==0) != e1.equals(e2)
With such a sorting, using SortedSet or
SortedMap will cause Bad, Strange Things
Another Way of Sorting
Use a TreeSet - automatically kept sorted!
Either the Objects in TreeSet must implement Comparable
Or give a Comparator Object when making the TreeSet
SortedSet<Name> names = new TreeSet<Name>(new FirstNameFirst());
names.add(new Name("Laura", "Dern"));
names.add(new Name("Harry", "Stanton"));
names.add(new Name("Diane", "Ladd"));
System.out.println(names);
// => [Diane Ladd, Harry Stanton, Laura Dern]
Day 3 Wrap-Up
Ask questions!
There was more here than anyone could get
or remember
Think of what you want your code to do, and
the best way to express that
Read Sun’s Java Documentation:
http://java.sun.com/j2se/1.5.0/docs/api
No one can keep Java in their head
Every time you code, have this page open
6.776
High Speed Communication Circuits
Lecture 3
Wave Guides and Transmission Lines
Massachusetts Institute of
Technology
February 8, 2005
Copyright © 2005 by Hae-Seung Lee and Michael H. Perrott
Maxwell's Equations in Free Space

Take the curl of (1):

∇ × (∇ × E) = −∇ × (μ ∂H/∂t) = −μ ∂/∂t (∇ × H)    (5)

From (2):

−μ ∂/∂t (∇ × H) = −με ∂²E/∂t²    (6)

Vector identity + (3):

∇ × (∇ × E) = ∇(∇·E) − ∇²E = −∇²E    (7)

H.-S. Lee & M.H. Perrott
MIT OCW
Simplified Maxwell's Equations

Putting together (5), (6) and (7):

∇²E − με ∂²E/∂t² = 0    (8)

Similarly for H:

∇²H − με ∂²H/∂t² = 0    (9)

For simplicity, assume variation only in the z-direction:

∇²E = ∂²E/∂z²  and  ∇²H = ∂²H/∂z²    (10)
Solutions to Maxwell's Equations

Using (10), (8) reduces to:

∂²E/∂z² − με ∂²E/∂t² = 0    (11)

Similarly for H:

∂²H/∂z² − με ∂²H/∂t² = 0    (12)

(11) and (12) can be satisfied by any function of the form f(z ± vt), where

v = 1/√(με)
Calculating Propagation Speed

- The function f is a function of time AND position
- Velocity calculation: z ± vt = constant ⇒ ∂z/∂t = ∓v
- The solution propagates in the +z or −z direction with a velocity of v = 1/√(με)
Assume Sinusoidal Steady-State

E and H solutions are of the form

A e^{j(ωt ± z/v)} = A e^{j(ωt ± kz)}

where k = ω/v = ω√(με)
Assumptions

- Orientation and direction
  - E field is in the x-direction and traveling in the z-direction
  - H field is in the y-direction and traveling in the z-direction
  - In free space: (Figure: Ex along x, Hy along y, direction of travel along z.)
- For a transmission line (TEM mode): (Figure: parallel plates of width a and separation b; Ex and Hy confined between the plates, direction of travel along z.)
Solutions

- Fields change only in time and in the z-direction
- Implications:

Evaluate Curl Operations in Maxwell's Formula

- Definition
- Given the previous assumptions
Now Put All the Pieces Together

- Solve Maxwell's Equations (1) and (2)
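The curl evaluations themselves are left off the extracted slides. Filling them in under the stated assumptions (E = E_x(z,t) x̂, H = H_y(z,t) ŷ), only one component of each curl survives:

```latex
\nabla\times\mathbf{E} = \frac{\partial E_x}{\partial z}\,\hat{y}
  = -\mu\,\frac{\partial H_y}{\partial t}\,\hat{y},
\qquad
\nabla\times\mathbf{H} = -\frac{\partial H_y}{\partial z}\,\hat{x}
  = \varepsilon\,\frac{\partial E_x}{\partial t}\,\hat{x}
```

so the two scalar equations that remain are ∂E_x/∂z = −μ ∂H_y/∂t and ∂H_y/∂z = −ε ∂E_x/∂t.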
Freespace Values

- Constants
- Impedance
- Propagation speed
- Wavelength of a 30 GHz signal
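The numbers on this slide are elided in the extraction; the standard free-space values are:

```latex
\varepsilon_0 \approx 8.854\times10^{-12}\,\mathrm{F/m},\qquad
\mu_0 = 4\pi\times10^{-7}\,\mathrm{H/m},\qquad
\eta_0 = \sqrt{\mu_0/\varepsilon_0} \approx 377\,\Omega
```

```latex
v = \frac{1}{\sqrt{\mu_0\varepsilon_0}} = c \approx 3\times10^{8}\,\mathrm{m/s},\qquad
\lambda = \frac{c}{f} = \frac{3\times10^{8}}{30\times10^{9}} = 1\,\mathrm{cm}
```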
Voltage and Current

- Definitions: (Figure: parallel-plate line with plate width a and separation b; the voltage V follows from Ex integrated across the gap, the current I from Hy along the plate width.)
Parallel Plate Waveguide

- E-field and H-field are influenced by the plates (Figure: plates of width a, separation b; Ex and Hy between them.)
Current and H-Field

- Assume that (AC) current I is flowing (Figure: current I on each plate, in opposite directions.)
- Current flowing down the waveguide influences the H-field
- Flux from one plate interacts with flux from the other plate
- Approximate the H-field to be uniform and restricted to lie between the plates
Voltage and E-Field

- Approximate the E-field to be uniform and restricted to lie between the plates (Figure: surface current density J on the plates; voltage V across the gap b.)
Back to Maxwell's Equations

- From the previous analysis
- These can be equivalently written as
- Where
Wave Equation for Transmission Line (TEM)

- Key formulas
- Substitute (2) into (1)
- Characteristic impedance (use Equation (1))
Connecting to the Real World

- Typical of sinusoidal analysis using phasors, the solutions are complex
- Take the real part of the solution to find the real-world solution

Calculating Propagation Speed

- The resulting cosine wave is a function of time AND position (Figure: snapshots of Ex(z,t) traveling in the z-direction.)
- Consider "riding" one part of the wave
- Velocity calculation
Integrated Circuit Values

- Constants
- Impedance (geometry/material dependent)
- Propagation speed (geometry independent, material dependent)
- Wavelength of a 30 GHz signal in silicon dioxide
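The values here are also elided; filled-in standard numbers for silicon dioxide (assuming εr ≈ 3.9 and μ ≈ μ0):

```latex
v = \frac{c}{\sqrt{\varepsilon_r}} \approx \frac{3\times10^{8}}{\sqrt{3.9}}
  \approx 1.5\times10^{8}\,\mathrm{m/s},
\qquad
\lambda = \frac{v}{f} \approx \frac{1.5\times10^{8}}{30\times10^{9}} \approx 5\,\mathrm{mm}
```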
LC Network Analogy of Transmission Line (TEM)

- LC network analogy (Figure: ladder of series L and shunt C sections with input impedance Zin.)
- Calculate input impedance
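The input-impedance calculation is elided on the slide; for the semi-infinite ladder the standard self-consistency argument is that adding one more L–C section leaves Zin unchanged:

```latex
Z_{in} = j\omega L + \frac{1}{j\omega C}\,\Big\Vert\, Z_{in}
\;\Longrightarrow\;
Z_{in}^2 = \frac{L}{C} + j\omega L\, Z_{in}
\;\xrightarrow{\;\omega L \to 0\;}\;
Z_{in} = \sqrt{L/C}
```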
How are Lumped LC and Transmission Lines Different?

- In a transmission line, the L and C values are infinitesimally small
  - It is always true that
- For lumped LC, L and C have finite values
  - Finite frequency range for
Lossy Transmission Lines

- Practical transmission lines have losses in their conductor and dielectric material
  - We model such loss by including resistors in the LC model (Figure: ladder of series R and L with shunt 1/G and C sections, input impedance Zin.)
- The presence of such losses has two effects on signals traveling through the line:
  - Attenuation
  - Dispersion (i.e., bandwidth degradation)
- See textbook for analysis
18.354J Nonlinear Dynamics II: Continuum Systems Lecture 2
Spring 2015
2 Dimensional analysis
Before moving on to more ‘sophisticated things’, let us think a little about dimensional
analysis and scaling. On the one hand these are trivial, and on the other they give a simple
method for getting answers to problems that might otherwise be intractable. The idea
behind dimensional analysis is very simple: Any physical law must be expressible in any
system of units that you use. There are two consequences of this:
• One can often guess the answer just by thinking about what the dimensions of the
answer should be, and then expressing the answer in terms of quantities that are
known to have those dimensions1.
• The scientifically interesting results are always expressible in terms of quantities that
are dimensionless, not depending on the system of units that you are using.
One example of a dimensionless number relevant for fluid dynamics that we have already
encountered in the introductory class is the Reynolds number, which quantifies the relative
strength of viscous and inertial forces. Another example of dimensional analysis that we will
study in detail is the solution to the diffusion equation for the spreading of a point source.
The only relevant physical parameter is the diffusion constant D, which has dimensions of
L2T −1. We denote this by writing [D] = L2T −1. Therefore the characteristic scale over
which the solution varies after time t must be √(Dt). This might seem like a rather simple
result, but it expresses the essence of solutions to the diffusion equation. Of course, we will
be able to solve the diffusion equation exactly, so this argument wasn’t really necessary. In
practice, however, we will rarely find useful exact solutions to the Navier-Stokes equations,
and so dimensional analysis will often give us insight before diving into the mathematics
or numerical simulations. Before formalising our approach, let us consider a few examples
where simple dimensional arguments intuitively lead to interesting results.
2.1 The pendulum

This is a trivial problem that you know quite well. Consider a pendulum with length L and mass m, hanging in a gravitational field of strength g. What is the period of the pendulum? We need a way to construct a quantity with units of time involving these numbers. The only possible way to do this is with the combination √(L/g). Therefore, we know immediately that

τ = c√(L/g).     (23)

This result might seem trivial to you, as you will probably remember (e.g., from a previous course) that c = 2π, if one solves the full dynamical problem for small amplitude oscillations. However, the above formula works even for large amplitude oscillations.
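The "only possible combination" claim can be checked mechanically: write each variable's dimensions as exponent vectors in (M, L, T) and solve a small linear system for the powers that yield a time. The setup below is ours, not from the notes:

```python
# Dimensional analysis as linear algebra: find powers (a, b, c) so that
# L_pend^a * m^b * g^c has the dimension of time, T^1.
# Rows are exponents of M, L, T; columns correspond to (L_pend, m, g):
#   L_pend ~ L^1,  m ~ M^1,  g ~ L^1 T^-2.
A = [[0.0, 1.0, 0.0],   # M exponents
     [1.0, 0.0, 1.0],   # L exponents
     [0.0, 0.0, -2.0]]  # T exponents
b = [0.0, 0.0, 1.0]     # target dimension: T^1

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer(A, b):
    # Solve the 3x3 system by Cramer's rule.
    d = det3(A)
    out = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]
        out.append(det3(Aj) / d)
    return out

powers = cramer(A, b)
# powers == [0.5, 0.0, -0.5]: tau ~ sqrt(L_pend/g), and the mass drops out.
print(powers)
```
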
2.2 Pythagorean theorem
Now we try to prove the Pythagorean theorem by dimensional analysis. Suppose you are given a right triangle, with hypotenuse length L and smallest acute angle φ. The area of the triangle is clearly

A = A(L, φ).     (24)

Since φ is dimensionless, it must be that

A = L²f(φ),     (25)

where f is some function we don't know.

¹Be careful to distinguish between dimensions and units. For example mass (M), length (L) and time (T) are dimensions, and they can have different units of measurement (e.g. length may be in feet or meters).
Now the triangle can be divided into two little right triangles by dropping a line from
the right angle which is perpendicular to the hypotenuse. The two right triangles have
hypotenuses that happen to be the other two sides of our original right triangle, let’s call
them a and b. So we know that the areas of the two smaller triangles are a2f (φ) and b2f (φ)
(where elementary geometry shows that the acute angle φ is the same for the two little
triangles as the big triangle). Moreover, since these are all right triangles, the function f is
the same for each. Therefore, since the area of the big triangle is just the sum of the areas
of the little ones, we have
L²f = a²f + b²f,     (26)

or

L² = a² + b².
2.3 The gravitational oscillation of a star
It is known that the sun, and many other stars undergo some mode of oscillation. The
question we might ask is how does the frequency of oscillation ω depend on the properties
of that star? The first step is to identify the physically relevant variables. These are the
density ρ, the radius R and the gravitational constant G (as the oscillations are due to
gravitational forces). So we have
ω = ω(ρ, R, G).
(27)
The dimensions of the variables are [ω] = T −1, [ρ] = M L−3, [R] = L and [G] = M −1L3T −2.
The only way we can combine these to give a quantity with the dimensions of T⁻¹ is through the relation

ω = c√(Gρ).     (28)
Thus, we see that the frequency of oscillation is proportional to the square root of the density
and independent of the radius. The determination of c requires a real stellar observation, but
we have already determined a lot of interesting details from dimensional analysis alone. For
the sun, ρ = 1400 kg/m³, giving ω ∼ 3 × 10⁻⁴ s⁻¹. The period of oscillation is approximately 1 hour, which is reasonable. However, for a neutron star (ρ = 7 × 10¹⁷ kg m⁻³) we predict ω ∼ 7000 s⁻¹, corresponding to a period in the milli-second range.
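A quick numerical check of ω ~ √(Gρ), using the representative densities quoted above:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2

def omega(rho):
    """Gravitational oscillation frequency scale, omega ~ sqrt(G*rho) (c set to 1)."""
    return math.sqrt(G * rho)

omega_sun = omega(1400.0)    # mean solar density ~1400 kg/m^3
omega_ns = omega(7e17)       # neutron-star density ~7e17 kg/m^3

print(omega_sun, omega_ns)   # ~3e-4 s^-1 and ~7e3 s^-1
```
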
2.4 The oscillation of a droplet
What happens if instead of considering a large body of fluid, such as a star, we consider a
smaller body of fluid, such as a raindrop. Well, in this case we argue that surface tension γ
provides the relevant restoring force and we can neglect gravity. γ has dimensions of energy/area, so that [γ] = MT⁻². The only quantity we can now make with the dimensions of T⁻¹ using our physical variables is

ω = c√(γ/(ρR³)),     (29)

which is not independent of the radius. For water γ = 0.07 N m⁻¹, giving us a characteristic frequency of 3 Hz for a raindrop.
One final question we might ask ourselves before moving on is how big does the droplet
have to be for gravity to have an effect? We reason that the crossover will occur when the
two models give the same frequency of oscillation. Thus, when
√(Gρ) = √(γ/(ρR³)),     (30)

we find that

Rc ∼ (γ/(ρ²G))^{1/3}.     (31)
This gives a crossover radius of about 10m for water.
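The crossover estimate can be verified directly, with the same representative values for water as above:

```python
# Crossover droplet size: gravity-driven and surface-tension-driven
# oscillation frequencies match when R ~ (gamma / (rho^2 * G))^(1/3).
G = 6.674e-11     # m^3 kg^-1 s^-2
gamma = 0.07      # N/m, surface tension of water
rho = 1000.0      # kg/m^3, density of water

R_c = (gamma / (rho**2 * G)) ** (1.0 / 3.0)
print(R_c)        # ~10 m for water, as stated
```
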
2.5 Water waves
This is a subject we will deal with in greater detail towards the end of the course, but for
now we look to obtain a basic understanding of the motion of waves on the surface of water.
For example, how does the frequency of the wave depend on the wavelength λ? This is
called the dispersion relation.
If the wavelength is long, we expect gravity to provide the restoring force, and the
relevant physical variables in determining the frequency would appear to be the mass density
ρ, the gravitational acceleration g and the wave number k = 2π/λ. The dimensions of these
quantities are [ρ] = M L−3, [g] = LT −2 and [k] = L−1. We can construct a quantity with
the dimensions of T −1 through the relation
ω = c√(gk).     (32)
We see that the frequency of water waves is proportional to the square root of the wavenum-
ber, in contrast to light waves for which the frequency is proportional to the wavenumber.
As with a droplet, we might worry about the effects of surface tension when the wavelength gets small. In this case we replace g with γ in our list of physically relevant variables.
Given that [γ] = M T −2, the dispersion relation must be of the form
ω = c√(γk³/ρ),     (33)
which is very different to that for gravity waves. If we look for a crossover, we find that the
frequencies of gravity waves and capillary waves are equal when
k ∼ √(ρg/γ).     (34)
This gives a wavelength of 1cm for water waves.
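The crossover wavelength 2π/k follows directly, again with representative values for water:

```python
import math

# Gravity-capillary crossover: sqrt(g*k) = sqrt(gamma*k^3/rho)
# gives k ~ sqrt(rho*g/gamma); the corresponding wavelength is 2*pi/k.
rho = 1000.0    # kg/m^3
g = 9.81        # m/s^2
gamma = 0.07    # N/m

k_c = math.sqrt(rho * g / gamma)
wavelength_c = 2 * math.pi / k_c
print(wavelength_c)   # ~1.7e-2 m, i.e. of order 1 cm as stated
```
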
MIT OpenCourseWare
http://ocw.mit.edu
18.354J / 1.062J / 12.207J Nonlinear Dynamics II: Continuum Systems
Spring 2015
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/18-354j-nonlinear-dynamics-ii-continuum-systems-spring-2015/051766b6177c971b4cd0f005ddfdfe17_MIT18_354JS15_Ch2.pdf |
IV.E Perturbative RG (First Order)
The last section demonstrated how various expectation values associated with the
Landau–Ginzburg Hamiltonian can be calculated perturbatively in powers of u. However,
the perturbative series is inherently divergent close to the critical point and cannot be used
to characterize critical behavior in dimensions d ≤ 4. Wilson showed that it is possible to
combine perturbative and renormalization group approaches into a systematic method for
calculating critical exponents. Accordingly, we shall extend the RG calculation of the Gaussian model in sec. III.G to the Landau–Ginzburg Hamiltonian, by treating U = u∫d^d x m⁴ as a perturbation.
1. Coarse Grain: This is the most difficult step of the RG procedure. As before, subdivide
the fluctuations into two components as,
m(q) = { m̃(q)  for 0 < q < Λ/b,
         σ(q)   for Λ/b < q < Λ.     (IV.28)

In the partition function,
Z = ∫ Dm̃(q) Dσ(q) exp{ −∫_0^Λ (d^d q/(2π)^d) [(t + Kq²)/2] (|m̃(q)|² + |σ(q)|²) − U[m̃(q), σ(q)] },     (IV.29)

the two sets of modes are mixed by the operator U. Formally, the result of integrating out {σ(q)} can be written as

Z = ∫ Dm̃(q) exp{ −∫_0^{Λ/b} (d^d q/(2π)^d) [(t + Kq²)/2] |m̃(q)|² }
    × exp{ −(nV/2) ∫_{Λ/b}^Λ (d^d q/(2π)^d) ln(t + Kq²) } ⟨e^{−U[m̃,σ]}⟩_σ
  ≡ ∫ Dm̃(q) e^{−βH̃[m̃]}.     (IV.30)
Here we have defined the partial averages
⟨O⟩_σ ≡ (1/Z_σ) ∫ Dσ(q) O exp{ −∫_{Λ/b}^Λ (d^d q/(2π)^d) [(t + Kq²)/2] |σ(q)|² },     (IV.31)

with Z_σ = ∫ Dσ(q) exp{−βH₀[σ]} being the Gaussian partition function associated with
the short wavelength fluctuations. From eq.(IV.30), we obtain
�
β˜H[ ~˜ = V δfb
m]
0 +
Λ/b
d
(2
dq
)d
π
t +
Kq2
2
�
�
� 0
m(q)|2 − ln
| ˜
−U [ ~˜ σ]
m,~
e
�
(IV.32)
.
σ
�
59
The final expression can be calculated perturbatively as,
ln⟨e^{−U}⟩_σ = −⟨U⟩_σ + (1/2)[⟨U²⟩_σ − ⟨U⟩_σ²] + ··· + [(−1)^ℓ/ℓ!] × (ℓth cumulant of U) + ···.     (IV.33)
The cumulants can be computed using the rules set in the previous sections. For example,
at the first order we need to compute
⟨U[m̃, σ]⟩_σ = u ∫ (d^d q1 d^d q2 d^d q3 d^d q4/(2π)^{4d}) (2π)^d δ^d(q1 + q2 + q3 + q4)
  × ⟨[m̃(q1) + σ(q1)]·[m̃(q2) + σ(q2)] [m̃(q3) + σ(q3)]·[m̃(q4) + σ(q4)]⟩_σ.     (IV.34)

The following types of terms result from expanding the product:
[1]  1   m̃(q1)·m̃(q2) m̃(q3)·m̃(q4)
[2]  4   ⟨σ(q1)·m̃(q2) m̃(q3)·m̃(q4)⟩_σ
[3]  2   ⟨σ(q1)·σ(q2)⟩_σ m̃(q3)·m̃(q4)
[4]  4   ⟨σ(q1)·m̃(q2) σ(q3)·m̃(q4)⟩_σ
[5]  4   ⟨σ(q1)·σ(q2) σ(q3)·m̃(q4)⟩_σ
[6]  1   ⟨σ(q1)·σ(q2) σ(q3)·σ(q4)⟩_σ     (IV.35)

The second element in each line is the number of terms with a given 'symmetry'. The total of these coefficients is 2⁴ = 16. Since the averages ⟨O⟩_σ involve only the short wavelength fluctuations, only contractions with σ appear. The resulting internal momenta are integrated from Λ/b to Λ.
Term [1] has no σ factors and evaluates to U[m̃]. The second and fifth terms involve an odd number of σs and their average is zero. Term [3] has one contraction and evaluates to

−u × 2 ∫ (d^d q1 ··· d^d q4/(2π)^{4d}) (2π)^d δ^d(q1 + ··· + q4) δ_jj [(2π)^d δ^d(q1 + q2)/(t + Kq1²)] m̃(q3)·m̃(q4)
  = −2nu ∫_0^{Λ/b} (d^d q/(2π)^d) |m̃(q)|² ∫_{Λ/b}^Λ (d^d k/(2π)^d) 1/(t + Kk²).     (IV.36)
Term [4] also has one contraction but there is no closed loop (the factor δ_jj) and hence no factor of n. The various contractions of 4σ in term [6] lead to a number of terms with no dependence on m̃. We shall denote the sum of these terms by uV δf_b¹. Summing up all terms, the coarse grained Hamiltonian at order of u is given by
βH̃[m̃] = V(δf_b⁰ + u δf_b¹) + ∫_0^{Λ/b} (d^d q/(2π)^d) [(t̃ + Kq²)/2] |m̃(q)|²
  + u ∫_0^{Λ/b} (d^d q1 d^d q2 d^d q3/(2π)^{3d}) m̃(q1)·m̃(q2) m̃(q3)·m̃(−q1 − q2 − q3),     (IV.37)

where

t̃ = t + 4u(n + 2) ∫_{Λ/b}^Λ (d^d k/(2π)^d) 1/(t + Kk²).     (IV.38)

The coarse grained Hamiltonian is thus described by the same 3 parameters t, K, and u.
The other two parameters in the coarse grained Hamiltonian are unchanged, i.e.
K̃ = K, and ũ = u.     (IV.39)
2. Rescale by setting q = b⁻¹q′, and

3. Renormalize, m̃ = z m′, to get

(βH)′[m′] = V(δf_b⁰ + u δf_b¹) + ∫_0^Λ (d^d q′/(2π)^d) b⁻ᵈ z² [(t̃ + Kb⁻²q′²)/2] |m′(q′)|²
  + u z⁴ b⁻³ᵈ ∫_0^Λ (d^d q′1 d^d q′2 d^d q′3/(2π)^{3d}) m′(q′1)·m′(q′2) m′(q′3)·m′(−q′1 − q′2 − q′3).     (IV.40)
The renormalized Hamiltonian is characterized by the triplet of interactions (t ′, K ′ , u ′ ),
such that
t′ = b⁻ᵈ z² t̃,     K′ = b⁻ᵈ⁻² z² K,     u′ = b⁻³ᵈ z⁴ u.     (IV.41)
As in the Gaussian model there is a fixed point at t ∗ = u ∗ = 0, provided that we set
z = b^{1+d/2}, such that K′ = K. The recursion relations for t and u in the vicinity of this point are given by

t′_b = b² [ t + 4u(n + 2) ∫_{Λ/b}^Λ (d^d k/(2π)^d) 1/(t + Kk²) ],
u′_b = b^{4−d} u.     (IV.42)
While the recursion relation for u at this order is identical to that obtained by dimensional analysis, the one for t is different. It is common to convert the discrete recursion relations to continuous differential equations by setting b = e^ℓ, such that for an infinitesimal δℓ,

t′_b ≡ t(b) = t(1 + δℓ) = t + δℓ (dt/dℓ) + O(δℓ²),
u′_b ≡ u(b) = u + δℓ (du/dℓ) + O(δℓ²).
Expanding eqs.(IV.42) to order of δℓ, gives
t + δℓ (dt/dℓ) = (1 + 2δℓ) [ t + 4u(n + 2) (S_d/(2π)^d) Λᵈ δℓ/(t + KΛ²) ],
u + δℓ (du/dℓ) = [1 + (4 − d)δℓ] u.     (IV.43)
The differential equations governing the evolution of t and u under rescaling are then
dt/dℓ = 2t + 4u(n + 2) K_d Λᵈ/(t + KΛ²),
du/dℓ = (4 − d) u.     (IV.44)

The recursion relation for u is easily integrated to give u(ℓ) = u₀ e^{(4−d)ℓ} = u₀ b^{4−d}.
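The flow (IV.44) is straightforward to integrate numerically; the sketch below uses illustrative parameter values (n = 1, K = Λ = 1, K_d = 0.05 — our choices, not the notes') and checks the exact solution for u:

```python
import math

# Euler integration of the first-order RG flow (IV.44):
#   dt/dl = 2t + 4u(n+2) Kd Lam^d / (t + K Lam^2),  du/dl = (4-d) u.
n, K, Lam, Kd = 1, 1.0, 1.0, 0.05    # illustrative values
d = 3.0                              # below the upper critical dimension
eps = 4.0 - d

t, u = 0.0, 0.01                     # start near the Gaussian fixed point
dl = 1e-4
for _ in range(int(1.0 / dl)):       # integrate to l = 1
    dt = 2 * t + 4 * u * (n + 2) * Kd * Lam**d / (t + K * Lam**2)
    du = eps * u
    t, u = t + dt * dl, u + du * dl

u_exact = 0.01 * math.exp(eps * 1.0)   # u(l) = u0 * exp((4-d) l)
print(t, u, u_exact)
```

For d < 4 both t and u grow: the Gaussian fixed point has two relevant directions, as stated below.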
The recursion relations can be linearized in the vicinity of the fixed point t ∗ = u ∗ = 0,
by setting t = t ∗ + δt and u = u ∗ + δu, as
d/dℓ (δt, δu)ᵀ = ( 2   4(n + 2)K_d Λ^{d−2}/K ; 0   4 − d ) (δt, δu)ᵀ.     (IV.45)
In the differential form of the recursion relations, the eigenvalues of the matrix determine
the relevance of operators. Since the above matrix has zero elements on one side, its
eigenvalues are the diagonal elements, and as in the Gaussian model we can identify yt = 2,
and yu = 4 − d. The results at this order are identical to those obtained from dimensional
analysis on the Gaussian model. The only difference is in the eigen–directions. The
exponent yt = 2 is still associated with u = 0, while yu = 4 − d is actually associated
with the direction t = −4u(n + 2)K_dΛ^{d−2}/K. This agrees with the shift in the transition
temperature calculated to order of u from the susceptibility.
For d > 4 the Gaussian fixed point has only one unstable direction associated with yt. It thus correctly describes the phase transition. For d < 4 it has two relevant directions and is unstable. Unfortunately, the recursion relations have no other fixed point at this order and it appears that we have learned little from the perturbative RG. However, since we are dealing with an alternating series we can anticipate that the recursion relations at the next order are modified to

dt/dℓ = 2t + 4u(n + 2)K_dΛᵈ/(t + KΛ²) − Au²,
du/dℓ = (4 − d)u − Bu²,     (IV.46)
with A and B positive. There is now an additional fixed point at u* = (4 − d)/B for d < 4. For a systematic perturbation theory we need to keep the parameter u small. Thus the new fixed point can be explored systematically only for small ε = 4 − d; we are led to consider an expansion in the dimension of space in the vicinity of d = 4! For a calculation valid at O(ε) we have to keep track of terms of second order in the recursion relation for u, but only to first order in that of t. It is thus unnecessary to calculate the term A in the above recursion relation.
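The anticipated second-order flow for u can likewise be checked numerically: for d < 4 any small initial coupling runs away from the Gaussian fixed point and saturates at u* = ε/B (ε, B, and u₀ below are illustrative values, ours):

```python
# Euler integration of du/dl = eps*u - B*u**2 (second-order flow).
# For d < 4 the flow leaves the Gaussian fixed point u = 0 and
# saturates at the new fixed point u* = eps/B.
eps = 0.1       # eps = 4 - d (small, as required for the expansion)
B = 1.0         # illustrative positive coefficient
u = 1e-3        # small initial coupling

dl = 0.01
for _ in range(20000):               # integrate to l = 200 >> 1/eps
    u += (eps * u - B * u * u) * dl

u_star = eps / B
print(u, u_star)                     # u has converged to u* = 0.1
```
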
MIT OpenCourseWare
http://ocw.mit.edu
8.334 Statistical Mechanics II: Statistical Physics of Fields
Spring 2014
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/8-334-statistical-mechanics-ii-statistical-physics-of-fields-spring-2014/051af10c926b5c8b2ceafc7257669a65_MIT8_334S14_Lec10.pdf |
D-Lab
Spring 2010
Development through
Dialogue, Design and Dissemination
Today’s Class
• Logistics
• Design Box Presentations
• Design, Innovation, Invention and the
Design Process
• Discussion
– Readings
• Case Studies
Some Logistics
• Turning in Homework
• Course website
• Textbooks
Technology Boxes
• Which one is your favorite?
• Which one exemplifies the trade-offs that were made?
• 2 minutes or less!
Design, Innovation
and Invention
invent: to be the first to think of, make, or
use something
design: to work out or create the form or
structure of something
Source: Encarta® World English Dictionary © 1999 Microsoft Corporation.
All rights reserved. Developed for Microsoft by Bloomsbury
Publishing Plc. This content is excluded from our Creative Commons license. For more
information, see http://ocw.mit.edu/fairuse .
Innovation
Clear plastic bottles poking through
roof capture sunlight to illuminate
windowless rooms
http://www.youtube.com/watch?v=CS3764DmIP4
Harder problems lead to
better inventions
Shawn Frayne
Challenges in Design
• Tradeoffs
• Dynamics and long-term effects of use
• Details
• Time Pressures
• Economics
• Use and mis-use
• Ethics
The Design Process
• Information Gathering
• Problem Definition
• Design Specifications
• Idea Generation
• Analysis & Experimentation
• Concept Evaluation
• Detail Design
• Fabrication
• Testing & Evaluation
The Creativity Caveat
• Don't let the process detract from the product

The Changing Approach

The Design Process
• Information Gathering
• Problem Definition
• Design Specifications
• Idea Generation
• Analysis & Experimentation
• Concept Evaluation
• Detail Design
• Fabrication
• Testing & Evaluation
The Design Process
• Information Gathering
• Problem Definition
• Design Specifications
• Idea Generation
• Analysis & Experimentation
• Concept Evaluation
• Detail Design
• Fabrication
• Testing & Evaluation | https://ocw.mit.edu/courses/ec-720j-d-lab-ii-design-spring-2010/0544f53d0d8501e7ea1e16d5dba634ba_MITEC_720JS10_lec02.pdf |
The Design Process
• Information Gathering
• Problem Definition
• Design Specifications
• Idea Generation
• Analysis & Experimentation
• Concept Evaluation
• Detail Design
• Fabrication
• Testing & Evaluation
Design Specifications
• Translate customer needs into
quantitative design performance
targets
• Define internal basis for measuring
success
• Capture the necessary characteristics
for a successful product
• Provide a basis for resolving trade-offs
Translating Customer Needs

Need          | Design Attribute         | Units      | Owner
Easy assembly | Assembly time            | seconds    | Floyd
Safe          | Structural safety factor | —          | Lisa
Safe          | Fatigue life             | cycles     | Nathan
Magical       | Works like magic         | subjective | Meta

The Design Process
• Information Gathering
• Problem Definition
• Design Specifications
• Idea Generation
• Analysis & Experimentation
• Concept Evaluation
• Detail Design
• Fabrication
• Testing & Evaluation
Brainstorming Method
• generate lots of ideas
• explore all classes of solutions
• develop new perspectives
• generate usable information
Brainstorming Rules
• Defer judgment
• Build upon the ideas of others
• One conversation at a time
• Stay focused on the topic
• Encourage wild ideas
The Design Process
• Information Gathering
• Problem Definition
• Design Specifications
• Idea Generation
• Analysis & Experimentation
• Concept Evaluation
• Detail Design
• Fabrication
• Testing & Evaluation
The Design Process
• Information Gathering
• Problem Definition
• Design Specifications
• Idea Generation
• Analysis & Experimentation
• Concept Evaluation
• Detail Design
• Fabrication
• Testing & Evaluation
Pugh Chart
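A Pugh chart scores each concept against a datum concept, criterion by criterion, as better (+1), same (0), or worse (−1), then sums the columns. A toy sketch (concepts and criteria below are hypothetical, not from the slides):

```python
# Pugh chart: compare concepts to a datum, criterion by criterion.
# Scores: +1 better than datum, 0 same, -1 worse. (Example data is ours.)
criteria = ["cost", "ease of assembly", "durability", "safety"]
scores = {
    "concept A": [+1, 0, -1, 0],
    "concept B": [0, +1, +1, -1],
    "datum":     [0, 0, 0, 0],    # the reference concept scores all zeros
}

totals = {name: sum(s) for name, s in scores.items()}
best = max(totals, key=totals.get)
print(totals, best)
```
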
The Design Process
• Information Gathering
• Problem Definition
• Design Specifications
• Idea Generation
• Analysis & Experimentation
• Concept Evaluation
• Detail Design
• Fabrication
• Testing & Evaluation
The Design Process
• Information Gathering
• Problem Definition
• Design Specifications
• Idea Generation
• Analysis & Experimentation
• Concept Evaluation
• Detail Design
• Fabrication
• Testing & Evaluation
The Design Process
• Information Gathering
• Problem Definition
• Design Specifications
• Idea Generation
• Analysis & Experimentation
• Concept Evaluation
• Detail Design
• Fabrication
• Testing & Evaluation
The Design Process
[Figure: iterative design-cycle diagram with the steps Problem, Gather Information, Think of ideas, Choose the best idea, Work out details, Experiment, Build, Test, Get feedback, Solution]
Design for Developing Countries

"Brute force engineering options often meet the criteria, but somewhere there is a profound solution, which is simple, cheap, and beautiful. Hold out for this as long as possible."
-Kurt Kornbluth, former D-Lab Instructor
Battery-operated field incubator: $1250
Thermo-electric field incubator: $500
Commercial incubator photos (left and center) © source unknown.
All rights reserved. This content is excluded from our Creative
Commons license. For more information, see http://ocw.mit.edu/fairuse.
Phase change incubator: $100

The Phase Change Incubator
[Figure: temperature vs. time curve for the phase-change incubator, showing the plateau at the solid–liquid transition]

Guiding Principles for DfDC
• Identify functional requirements
• Encourage participatory development
• Value indigenous knowledge
• Promote local innovation
• Strive for sustainability
Technology Case Studies
Coming up…
• Project Selection (Mar 1)
– Design challenge descriptions due for review by
Wednesday, Feb 17
– Slides due by noon on Wednesday, Feb 24
• Readings on course website
• Homework 1 (due Feb 10)
• Homework 3 (due Feb 10)
MIT OpenCourseWare
http://ocw.mit.edu

EC.720J / 2.722J D-Lab II: Design
Spring 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
6.897: Selected Topics in Cryptography
Lectures 9 and 10
Lecturer: Ran Canetti
Highlights of past lectures
Presented two frameworks for analyzing protocols:
• A basic framework:
– Only function evaluation
– Synchronous
– Non-adaptive corruptions
– Modular composition (only non-concurrent)
• A stronger framework (UC):
– General reactive tasks
– Asynchronous (can express different types of synchrony)
– Adaptive corruptions
– Concurrent modular composition (universal composition)
Review of the definition:
[Figure: two interactions side by side — the ideal process (parties P1–P4, functionality F, simulator S, environment Z) and the protocol execution (parties P1–P4, adversary A, environment Z)]
Protocol P securely realizes F if:
For any adversary A
There exists an adversary S
Such that no environment Z can tell
whether it interacts with:
- A run of π with A
- An ideal run with F and S
Lectures 9 and 10
UC Commitment and Zero-Knowledge
• Quick review of known feasibility results in the UC
framework.
• UC commitments: The basic functionality, Fcom.
• Impossibility of realizing Fcom in the plain model.
• Realizing Fcom in the common reference string model.
• Multiple commitments with a single string:
– Functionality Fmcom.
– Realizing Fmcom.
• From UC commitments to UC ZK:
Realizing Fzk in the Fcom-hybrid model.
Questions:
• How to write ideal functionalities that adequately capture known/new tasks?
• Are known protocols UC-secure? (Do these protocols realize the ideal functionalities associated with the corresponding tasks?)
• How to design UC-secure protocols?
Existence results: Honest majority
Multiparty protocols with honest majority:
Thm: Can realize any functionality [C. 01].
(e.g. use the protocols of [BenOr-Goldwasser-Wigderson88, Rabin-BenOr89, Canetti-Feige-Goldreich-Naor96].)
Two-party functionalities
• Known protocols do not work. ("Black-box simulation with rewinding" cannot be used.)
• Many interesting functionalities (commitment, ZK, coin tossing, etc.) cannot be realized in the plain model.
• In the "common random string model" can do:
– UC Commitment
[Canetti-Fischlin01,Canetti-Lindell-Ostrovsky-Sahai02,Damgard-Nielsen02,
Damgard-Groth03, Hofheinz-Mueller-Quade04].
– UC Zero-Knowledge [CF01, DeSantis et.al. 01]
– Any two-party functionality [CLOS02,Cramer-Damgard-
Nielsen03]
(Generalizes to any multiparty functionality with any
number of faults.)
UC Encryption and signature
• Can write a "digital signature functionality" Fsig. Realizing Fsig is equivalent to "security against chosen message attacks" as in [Goldwasser-Micali-Rivest88].
  - Using Fsig, can realize "ideal certification authorities" and "ideally authenticated communication".
• Can write a "public key encryption functionality", Fpke. Realizing Fpke w.r.t. non-adaptive adversaries is equivalent to "security against chosen ciphertext attacks (CCA)" as in [Rackoff-Simon91, Dolev-Dwork-Naor91, …].
  - Can formulate a relaxed variant of Fpke, that still captures most of the current applications of CCA security.
  - What about realizing Fpke w.r.t. adaptive adversaries?
    • As is, it's impossible.
    • Can relax Fpke a bit so that it becomes possible (but still very complicated) [Canetti-Halevi-Katz04]. How to do it simply?
UC key-exchange and secure channels
• Can write ideal functionalities that capture
Key-Exchange and Secure-Channels.
• Can show | https://ocw.mit.edu/courses/6-897-selected-topics-in-cryptography-spring-2004/054e4a5552a39239b46a1e5a5c09cf13_lecture9_10.pdf |
UC key-exchange and secure channels
• Can write ideal functionalities that capture
Key-Exchange and Secure-Channels.
• Can show that natural and practical protocols
are secure: ISO 9798-3, IKEv1, IKEv2,
SSL/TLS,…
• What about password-based key exchange?
• What about modeling symmetric encryption and
message authentication as ideal functionalities?
UC commitments
The commitment functionality, Fcom
1. Upon receiving (sid,C,V,“commit”,x) from
(sid,C), do:
1. Record x
2. Output (sid,C,V, “receipt”) to (sid,V)
3. Send (sid,C,V, “receipt”) to S
2. Upon receiving (sid,“open”) from (sid,C), do:
1. Output (sid,x) to (sid,V)
2. Send (sid,x) to S
3. Halt.
Note: Each copy of Fcom is used for a single commitment/decommitment only. Multiple commitments require multiple copies of Fcom.
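The bookkeeping above can be sketched as a tiny state machine (a toy Python model, ours; it captures only the message flow of Fcom, none of the cryptography):

```python
class FCom:
    """Toy model of the ideal commitment functionality Fcom:
    one commit, one open, then halt."""
    def __init__(self, sid):
        self.sid = sid
        self.value = None
        self.opened = False

    def commit(self, x):
        # Upon ("commit", x) from C: record x, send "receipt" to V and S.
        assert self.value is None, "single-use: one commitment per copy"
        self.value = x
        return (self.sid, "receipt")     # delivered to V (and to S)

    def open(self):
        # Upon ("open") from C: reveal the recorded value to V and S, halt.
        assert self.value is not None and not self.opened
        self.opened = True
        return (self.sid, self.value)

f = FCom(sid=7)
receipt = f.commit("b=1")
opening = f.open()
print(receipt, opening)   # (7, 'receipt') (7, 'b=1')
```

Note how the committed value is fixed at commit time — exactly the property the impossibility argument below exploits, since the simulator must hand this value to Fcom before the opening phase.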
Impossibility of realizing Fcom in the plain model
Fcom can be realized:
– By a "trivial" protocol that never generates any output. (The simulator never lets Fcom send output to any party.)
– By a protocol that uses third parties as "helpers".
⇒ A protocol is:
– Terminating, if when run between two honest parties, some
output is generated by at least one party.
– Bilateral, if only two parties participate in it.
Theorem: There exist no terminating, bilateral protocols
that securely realize Fcom in the plain real-life model.
(Theorem holds even in the Fauth-hybrid model.)
Proof Idea:
Let P be a protocol that realizes Fcom in the plain model,
and let S be an ideal-process adversary for P, for the
case that the commiter is corrupted.
Recall that S has to explicitly give the committed bit to
Fcom before the opening phase begins. This means that
S must be able to somehow “extract” the committed
value b from the corrupted committer.
However, in the UC framework S has no | https://ocw.mit.edu/courses/6-897-selected-topics-in-cryptography-spring-2004/054e4a5552a39239b46a1e5a5c09cf13_lecture9_10.pdf |
means that
S must be able to somehow “extract” the committed
value b from the corrupted committer.
However, in the UC framework S has no advantage over a
real-life verifier. Thus, a corrupted verifier can essentially
run S and extract the committed bit b from an honest
committer, before the opening phase begins, in
contradiction to the secrecy of the commitment.
More precisely, we proceed in two steps:
(I) Consider the following environment Zc and real-life adversary Ac
that controls the committer C:
– Ac is the dummy adversary: It reports to Zc any message received
from the verifier V, and sends to V any message provided by Zc.
– Zc chooses a random bit b, and runs the code of the honest C by
instructing Ac to deliver all the messages sent by C.
Once V outputs “receipt”, Zc runs the opening protocol of C with V, and
outputs 1 if the output bit b’ generated by V is equal to b.
From the security of P there exists an ideal-process adversary Sc
such that IDEAL_{Fcom,Sc,Zc} ≈ EXEC_{P,Ac,Zc}. But:
– In the real-life model, b', the output of V, is almost always the same as
the bit b that secretly Z chose.
– Consequently, also in the ideal process, b’=b almost always.
– Thus, the bit b’’ that S provides Fcom at the commitment phase is
almost always equal to b.
(II) Consider the following environment Zv and real-life adversary Av
that controls the verifier V:
– Zv chooses a random bit b, gives b as input to the honest committer, and outputs 1 if the adversary outputs a bit b' = b.
– Av runs Sc. Any message received from C is given to Sc, and any
message generated by Sc is given to C. When Sc outputs a bit b’ to be
given to Fcom, Av outputs b’ and halts.
Notice that the view of Sc when run by Av is identical to its view when
interacting with Zc in the ideal process for Fcom. Consequently,
from part (I) we have that in the run of Zv and Av almost always
b’=b.
However, when Zv interacts with any simulator S in the ideal process
for Fcom, the view of S is independent of b. Thus Zv outputs 1 w.p.
at most ½.
This contradicts the assumption that P securely realizes Fcom.
The common reference string functionality
Functionality Fcrs
(with prescribed distribution D)
1. Choose a value r from distribution D, and
send r to the adversary.
2. Upon receiving (“CRS”,sid) from party P,
send r to P.
Note: The Fcrs-hybrid model is essentially the “common reference string
model”, as usually defined in the literature (cf., Blum-Feldman-Micali89).
In particular: An adversary in the Fcrs-hybrid model expects to get the value of
the CRS from the ideal functionality. Thus, in a simulated interaction, the
simulator can choose the CRS by itself (and in particular it can know trapdoor
information related to the CRS).
Theorem: If trapdoor permutation pairs exist then there
exist terminating, bilateral protocols that realize Fcom
in the (Fauth,Fcrs)-hybrid model.
Remarks:
• Here we’ll only show the [CF01] construction, that is
based on claw-free pairs of trapdoor permutations.
• [DG03] showed that UC commitments imply key
exchange, so no black-box constructions from OWPs
exist.
• More efficient constructions based on Paillier’s
assumption exist [DN02, DG03, CS03].
Realizing Fcom in the Fcrs-hybrid model
• Roughly speaking, we need to make sure
that the ideal model adversary for Fcom
can:
– Extract the committed value from a corrupted
committer.
– Generate commitments that can be opened in
multiple ways.
– Explain internal state of committer and verifier
upon corruption (for adaptive security).
First attempt
• To obtain equivocability:
– Let f={f0, f1, f0^-1, f1^-1} be a claw-free pair of trapdoor
permutations. That is:
• f0, f1 are over the same domain.
• Given fi and x it is easy to compute fi(x).
• Given fi^-1 and x it is easy to compute fi^-1(x).
• Given only f0, f1, it is hard to find x0, x1 such that f0(x0)=f1(x1).
– Commitment Scheme:
• CRS: f0,f1
• To commit to bit b, choose random x in the domain of f
and send fb(x). To open, send b,x.
– Simulator chooses the CRS so that it knows the trapdoors f0^-1, f1^-1.
Now can equivocate: find x0,x1 s.t. f0(x0)=f1(x1)=y, send y.
• But: Not extractable…
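To see concretely what equivocation buys the simulator, here is a toy demo in the same spirit, but using a Pedersen-style discrete-log commitment as an analogous stand-in for the claw-free pair above (this is not the [CF01] construction itself, and the group parameters are toy-sized and completely insecure):

```python
# Equivocation with a trapdoor: the simulator who knows alpha = log_g(h)
# can open the same commitment y as either bit. Toy parameters only.
p = 1019                 # prime; q = (p-1)//2 = 509 is also prime
q = (p - 1) // 2
g = 4                    # generates the order-q subgroup (4 is a QR mod p)
alpha = 123              # trapdoor, known only to the simulator
h = pow(g, alpha, p)

def commit(b, x):
    # Pedersen-style commitment to bit b with randomness x
    return (pow(g, x, p) * pow(h, b, p)) % p

def verify(y, b, x):
    return commit(b, x) == y

x0 = 77
y = commit(0, x0)            # commit to 0
x1 = (x0 - alpha) % q        # g^x1 * h = g^x0, so y also opens as b=1

assert verify(y, 0, x0)
assert verify(y, 1, x1)
print("same y opens as 0 and as 1")
```

The claw-free-pair scheme gives the simulator the same power: knowing f0^-1, f1^-1 it can compute preimages of a single y under both functions.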
Second attempt
• To add extractability:
– Let (G,E,D) be a semantically secure encryption scheme.
– Commitment Scheme:
• Let G(k)=(e,d). CRS: f0,f1, e.
• To commit to a bit b, choose random x,r, and send fb(x),Ee(r,x).
To open, send b,x,r.
– Simulator chooses the CRS such that it knows the
decryption key d. So it can decrypt and extract b.
• But: lost equivocability…
Third attempt
• To restore equivocability:
– Scheme:
• CRS: f0,f1, e
• To commit to b:
– choose random x,r0,r1
– send fb(x),Ee(rb,x),Ee(r1-b,0)
• To open, send b,x,rb. (Don’t send r1-b.)
– To extract, simulator decrypts both encryptions and
finds x.
– To equivocate, simulator chooses x0,x1,r0,r1, such
that f0(x0)=f1(x1)=y and sends y,Ee(r0,x0),Ee(r1,x1).
The protocol (UCC) for static adversaries
• On input (sid,C,V,“commit”,b) C does:
– Choose random x,r0,r1. Obtain f0,f1, e from Fcrs.
– Compute y= fb(x), cb=Ee(rb,x), c1-b=Ee(r1-b,0), and send
(sid,C,V,y,c0,c1) to V.
• When receiving (sid,C,V,y,c0,c1) from C, V outputs
(sid,C,“receipt”,C).
• On input (sid,“open”), C does:
– Send b,x,rb to V.
• Having received b,x,r, V verifies that fb(x)=y and
cb=Ee(r,x). If verification succeeds then output
(“Open”,sid,cid,C,b). Else output nothing.
Proof of security (static case)
Let A be an adversary that interacts with parties running
protocol UCC in the Fcrs-hybrid model.
We construct a simulator S in the ideal process for Fcom
and show that for any environment Z,
IDEALFcom
S,Z ~ EXECucc,A,Z
Simulator S:
• Choose a c.f.p. (f0, f1, f0^-1, f1^-1) and keys (e,d) for the enc. scheme.
• Run a simulated copy of A and give it the CRS (f0, f1, e).
• All messages between A and Z are relayed unchanged.
• If the committer C is uncorrupted:
– If S is notified by Fcom that C wishes to commit to party V then simulate
for A a commitment from C to V: Choose y, compute x0=f0^-1(y), x1=f1^-1(y),
c0=Ee(r0,x0), c1=Ee(r1,x1), and send (y, c0, c1) from C to V. When A
delivers this message to V, send “ok” to Fcom.
– If S is notified by Fcom that C opened the commitment to value b, then S
simulates for A the opening message (b, xb, rb) from C to V.
• If C is corrupted:
– If a corrupted C sends a commitment (y, c0, c1) to V, then S decrypts c0 and c1:
• If c0 decrypts to x0 where x0=f0^-1(y), then send (sid,C,V,“commit”,0) to Fcom.
• If c1 decrypts to x1 where x1=f1^-1(y), then send (sid,C,V,“commit”,1) to Fcom.
– If C sends a valid opening message (b’,x,r) (i.e., x=fb’^-1(y) and cb’=Ee(r,x)),
then S checks whether b’ equals the bit sent to Fcom. If yes, then S sends
(sid, “Open”) to Fcom. Otherwise, S aborts the simulation.
Analysis of S:
Let Z be an environment. Define first the following hybrid interaction HYB:
Interaction HYB is identical to IDEAL_Fcom,S,Z, except that when S generates
commitments by uncorrupted parties, it “magically learns” the real bit b,
and then uses real (not fake) commitments. That is, the commitment is
(y, c0, c1) where c1-b=Ee(r1-b,0).
We proceed in two steps:
1. Show that EXEC_ucc,A,Z ~ HYB.
This is done by reduction to the security of the claw-free pair.
2. Show that HYB ~ IDEAL_Fcom,S,Z.
This is done by reduction to the semantic security of the encryption
scheme.
Step 1: Show that EXEC_ucc,A,Z ~ HYB:
• Note that the interactions EXEC_ucc,A,Z and HYB are identical, as long
as the adversary does not abort in an opening of a commitment made
by a corrupted party.
• We show that if S aborts with probability p then we can find claws in
(f0, f1) with probability p. That is, construct the following adv. D:
– Given (f0, f1), D simulates an interaction between Z and S (running A) when
the c.f.p. in the CRS is (f0, f1). D plays the role of S for Z and A. Since D
sees all the messages sent by Z, it knows the bits committed to by the
uncorrupted parties, and can simulate the interaction perfectly.
Furthermore, whenever S aborts then D finds a claw in (f0, f1): S
aborts if A provides a valid commitment to a bit b and then a valid
opening to 1-b. But in this case A generated a claw!
Step 2: Show that HYB ~ IDEAL_Fcom,S,Z:
Recall that the difference between HYB and IDEAL_Fcom,S,Z is that in HYB
the commitments generated by S are real, whereas in IDEAL_Fcom,S,Z
these commitments are fake.
Assume an env. Z and adv. A that distinguish between the two interactions.
Construct an adversary B that breaks the semantic security of (E,D):
Given encryption key e, B simulates an interaction between Z and S (running
A) when the encryption key in the CRS is e. B plays the role of S for Z
and A. Furthermore, when S needs to generate a commitment
(y, c0, c1), B does:
• cb is generated honestly as cb=Ee(rb,xb). (Recall, B knows b.)
• B asks its encryption oracle to encrypt one out of (0, x1-b) and sets the
answer C* to be c1-b.
Analysis of B:
• If C*=E(0) then the simulated Z sees an HYB interaction.
• If C*=E(x1-b) then the simulated Z sees an IDEAL_Fcom,S,Z interaction.
Since Z distinguishes between the two, B breaks the semantic security of the
encryption scheme.
Dealing with adaptive adversaries
Recall the protocol (UCC) for static adversaries
• On input (sid,C,V,“commit”,b) C does:
– Choose random x,r0,r1. Obtain f0,f1, e from Fcrs.
– Compute y= fb(x), cb=Ee(rb,x), c1-b=Ee(r1-b,0),
and send (sid,C,V,y,c0,c1) to V.
• When receiving (sid,C,V,y,c0,c1) from C, V outputs
(sid,C,“receipt”,C).
• On input (sid,“open”), C does:
– Send b,x,rb to V.
• Having received b,x,r, V verifies that fb(x)=y and cb=Ee(r,x).
If verification succeeds then output (“Open”,sid,cid,C,b).
Else output nothing.
Problem: When the committer is corrupted, it needs to present
the randomness r1-b. Now S is stuck…
Solutions:
• Erase r1-b immediately after use inside the encryption.
• If we do not trust erasures: Use an encryption where ciphertexts
are “pseudorandom”. Then the commitment protocol changes
to:
– Choose random x,r0,r1. Obtain f0,f1, e from Fcrs.
– Let y= fb(x), cb=Ee(rb,x), c1-b=r1-b, and send (sid,C,V,y,c0,c1) to V.
Simulation changes accordingly.
Note: Secure encryption with pseudorandom ciphertexts exists
given any trapdoor permutation: Use the Goldreich-Levin
HardCore bit. | https://ocw.mit.edu/courses/6-897-selected-topics-in-cryptography-spring-2004/054e4a5552a39239b46a1e5a5c09cf13_lecture9_10.pdf |
How to re-use the CRS?
Functionality Fcom handles only a single commitment.
Thus, to obtain multiple commitments one needs
multiple copies of Fcom . When replacing each copy of
Fcom with a protocol P that realizes it in the Fcrs-hybrid
model, one obtains multiple copies of P, which in turn
use multiple independent copies of Fcrs…
• Can we realize multiple copies of Fcom using a single
copy of Fcrs?
• How to formalize that?
The multi-instance commitment
functionality, Fmcom
1. Upon receiving (sid,cid,C,V,“commit”,x) from
(sid,C), do:
1. Record (cid,x)
2. Output (sid,cid,C,V, “receipt”) to (sid,V)
3. Send (sid,cid,C,V, “receipt”) to S
2. Upon receiving (sid,cid,“open”) from (sid,C), do:
1. Output (sid,cid,x) to (sid,V)
2. Send (sid,cid,x) to S
How to realize Fmcom?
• Trivial solution: Run multiple copies of protocol ucc,
where each copy uses its own copy of Fcrs…
• But, can we do it with a single copy of Fcrs?
• Does protocol ucc do the job?
Attempt 1: Run as is.
Bad: Adversary can copy commitments.
Attempt 2: Include the committer’s id inside the encryption. I.e., in the
commitment phase compute cb=Ee(rb,C.x), c1-b=Ee(r1-b,C.0).
Bad: Adversary can change the encrypted id inside c0,c1.
Attempt 3: Use CCA2 (“non-malleable”) encryption.
Works…
The protocol (UCMC) for static adversaries
• On input (“commit”,V,b,sid,cid) C does:
– Choose random x,r0,r1. Obtain f0,f1, e from Fcrs.
(Now e is the encryption key of a CCA2-secure encryption scheme.)
– Compute y= fb(x), cb=Ee(rb,C.x), c1-b=Ee(r1-b,C.0), and send
(sid,cid,C,V,y,c0,c1) to V.
• When receiving (sid,cid,C,V,y,c0,c1) from C, V outputs (“receipt”,C,sid,cid).
• On input (“open”,sid,cid), C does:
– Send b,x,rb to V.
• Having received b,x,rb, V verifies that fb(x)=y and cb=Ee(rb,C.x), and that cid
never appeared before in a commitment of C.
If verification succeeds then output (“Open”,sid,cid,C,b).
Else output nothing.
Proof of security (static case)
• The simulator S is identical to that of UCC, except that
here it handles multiple commitments and
decommitments.
• Analysis of S:
– Define the same hybrid interaction HYB.
– The proof that EXEC_ucc,A,Z ~ HYB remains essentially the
same, except that here there are many commitments
and decommitments.
– The proof that HYB ~ IDEAL_Fmcom,S,Z is similar in
structure to the proof for the single commitment case,
except that here the reduction is to the CCA security of
the encryption:
Simulator S:
• Choose a c.f.p. (f0, f1, f0^-1, f1^-1) and keys (e,d) for the enc. scheme.
• Run A and give it the CRS (f0, f1, e).
• All messages between A and Z are relayed unchanged.
• Commitments by uncorrupted parties:
– If S is notified by Fmcom that an uncorrupted C wishes to commit to party
V with a given cid, then simulate for A a commitment from C to V:
Choose y, compute x0=f0^-1(y), x1=f1^-1(y), c0=Ee(r0,C.x0), c1=Ee(r1,C.x1),
and send (y, c0, c1) from C to V. When A delivers this message to V,
send “ok” to Fmcom.
– If S is notified by Fmcom that C opened the commitment cid to value b,
then it simulates for A an opening message (b, xb, rb) from C to V.
• Commitments by corrupted parties:
– If A sends a commitment (cid, y, c0, c1) in the name of a corrupted committer C
to some V, then S decrypts c0. If c0 decrypts to C.x0 where x0=f0^-1(y), then let
b=0. Else b=1. Then, send (“commit”,C,V,b,sid,cid) to Fmcom.
– If A sends a valid opening message (b’,x,r) for some cid (i.e., x=fb’^-1(y) and
cb’=Ee(r,C.x)), and b’=b, then S sends (“Open”,sid,cid) to Fmcom.
If b’ != b, then S aborts the simulation.
Analysis of S:
Let Z be an environment. Define first the following hybrid interaction HYB:
Interaction HYB is identical to IDEAL_Fmcom,S,Z, except that when S
generates commitments by uncorrupted parties, it “magically learns” the
real bit b, and then uses real (not fake) commitments. That is, the
commitment is (y, c0, c1) where c1-b=Ee(r1-b,C.0).
We proceed in two steps:
1. Show that EXEC_ucc,A,Z ~ HYB.
This is done by reduction to the security of the claw-free pair.
2. Show that HYB ~ IDEAL_Fmcom,S,Z.
This is done by reduction to the security of the encryption scheme.
Step 1: Show that EXEC_ucc,A,Z ~ HYB:
• Note that the interactions EXEC_ucc,A,Z and HYB are identical, as long
as the adversary does not abort in an opening of a commitment made
by a corrupted party.
• We show that if S aborts with probability p then we can find claws in
(f0, f1) with probability p. That is, construct the following adv. D:
– Given (f0, f1), D simulates an interaction between Z and S (running A) when
the c.f.p. in the CRS is (f0, f1). D plays the role of S for Z and A. Since D
sees all the messages sent by Z, it knows the bits committed to by the
uncorrupted parties, and can simulate the interaction perfectly.
Furthermore, whenever S aborts then D finds a claw in (f0, f1): S
aborts if A provides a valid commitment to a bit b and then a valid
opening to 1-b. But in this case A generated a claw!
Step 2: Show that HYB ~ IDEAL_Fmcom,S,Z:
Recall that the difference between HYB and IDEAL_Fmcom,S,Z is that in HYB
the commitments generated by S are real, whereas in IDEAL_Fmcom,S,Z
these commitments are fake.
Assume an env. Z that distinguishes between the two interactions. Construct a
CCA-adversary B that breaks the security of (E,D). (In fact, B will
interact in a Left-or-Right CCA interaction):
Given encryption key e, B simulates an interaction between Z and S (running
A) when the encryption key in the CRS is e. B plays the role of S for Z and
A. Furthermore:
– When S needs to generate a commitment (y, c0, c1), B does:
• cb is generated honestly as cb=Ee(rb,C.xb). (Recall, B knows b.)
• B asks its encryption oracle to encrypt one out of (0, C.x1-b) and sets the
answer to be c1-b.
– When A sends a commitment (y, c0, c1), B does:
•
If either c0 or c1 are test ciphertexts then they can be safely ignored,
since they contain an ID of an uncorrupted party. Else, B asks its
decryption oracle to decrypt, and continues running S.
Note:
• If B’s oracle is a “Left” oracle (i.e., all the test ciphertexts are encryptions
of ID.0) then the simulated Z sees an HYB interaction.
• If B’s oracle is a “Right” oracle (i.e., all the test ciphertexts are
encryptions of ID.x1-b) then the simulated Z sees an IDEAL_Fmcom,S,Z
interaction.
Since Z distinguishes between the two, B breaks the LR-CCA security of the
encryption scheme.
Dealing with adaptive corruptions
Use the same trick as in the single-commitment case.
Question: How to obtain CCA-secure encryption with p.r.
ciphertexts?
– Cramer-Shoup…
– Use double encryption: E(x)=E’(E”(x)), where:
• E’ is CPA-secure with p.r. ciphertext (e.g., standard
encryption based on hard-core bits of trapdoor
permutations).
• E” is CCA-secure.
Note: E is not CCA-secure, but is good enough…
UC Zero-Knowledge from UC commitments
• Recall the ZKPoK ideal functionality, Fzk, and the
version with weak soundness, Fwzk.
• Recall the Blum Hamiltonicity protocol
• Show that, when cast in the Fcom-hybrid model, a
single iteration of the protocol realizes Fwzk.
(This result is unconditional, no reductions or
computational assumptions are necessary.)
• Show that one can realize Fzk using k parallel copies
of Fwzk.
The ZKPoK functionality Fzk (for relation H(G,h)).
1. Receive (sid, P,V,G,h) from (sid,P).
Then:
1. Output (sid, P, V, G, H(G,h)) to (sid,V)
2. Send (sid, P, V, G, H(G,h)) to S
3. Halt.
The weak ZKPoK functionality Fwzk
(for relation H(G,h)).
1. Receive (sid, P, V, G, h) from (sid,P).
Then:
1. If P is corrupted then:
• Choose b ∈R {0,1} and send b to S.
• Obtain a bit b’ and a cycle h’ from S, and replace h ← h’.
2. If H(G,h)=1 or b’=b=1 then set v ← 1. Else v ← 0.
3. Output (sid, P, V, G, v) to (sid,V) and to S.
4. Halt.
The Blum protocol in the Fcom-hybrid model
(“single iteration”)
Input: sid,P,V, graph G, Hamiltonian cycle h in G.
• P → V: Choose a random permutation p on [1..n].
Let bi be the i-th bit in p(G).p. Then, for each i send to
Fcom: (sid.i,P,V,“Commit”,bi).
• V → P: When getting “receipt”, send a random bit c.
• P → V:
If c=0 then send Fcom: (sid.i,“Open”) for all i.
If c=1 then open only commitments of edges in h.
• V accepts if all the commitment openings are received from
Fcom, and in addition:
– If c=0 then the opened graph and permutation match G.
– If c=1, then h is a Hamiltonian cycle.
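One honest-prover iteration can be sketched with a plain dict standing in for the ideal functionality Fcom (the commitments are hiding and binding "by fiat", which is exactly what the Fcom-hybrid model provides; the 4-node graph and cycle are made-up toy values, and session ids are omitted):

```python
# One Blum-Hamiltonicity iteration in the Fcom-hybrid model (toy sketch).
import random
random.seed(0)

n = 4
cycle = [0, 1, 2, 3]                        # Hamiltonian cycle in G
edges = {(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)}   # edge set of G (with a chord)
G = [[1 if (i, j) in edges or (j, i) in edges else 0
      for j in range(n)] for i in range(n)]

# Prover: commit (via "Fcom", here a dict) to every bit of the relabeled graph p(G)
p = list(range(n)); random.shuffle(p)
committed = {(i, j): G[p[i]][p[j]] for i in range(n) for j in range(n)}

# Verifier: random challenge bit
c = random.randint(0, 1)

# Prover opens; Verifier checks
if c == 0:
    # everything is opened along with p; check the openings match p(G)
    ok = all(committed[(i, j)] == G[p[i]][p[j]]
             for i in range(n) for j in range(n))
else:
    # only the cycle edges are opened; check they are all edges (a closed tour)
    q = {v: i for i, v in enumerate(p)}     # q = p^{-1}: original -> new labels
    tour = [q[cycle[k]] for k in range(n)]
    ok = all(committed[(tour[k], tour[(k + 1) % n])] == 1 for k in range(n))
print(ok)
```

With an honest prover both challenges verify; soundness of a single iteration is only 1/2, which is why the functionality realized is the weak Fwzk.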
Claim: The Blum protocol securely realizes Fwzk^H in the
Fcom–hybrid model.
Proof sketch: Let A be an adversary that interacts with the
protocol. Need to construct an ideal-process adversary S that
fools all environments. There are four cases:
1. A controls the verifier (Zero-Knowledge):
S gets input z’ from Z, and runs A on input z’. Next:
– If the value from Fwzk is (G,0) then hand (G,”reject”) to A.
– If the value from Fwzk is (G,1) then simulate an interaction for V:
• For all i, send (sid_i, “receipt”) to A.
• Obtain the challenge c from A.
• If c=0 then send openings of a random permutation of G to A.
• If c=1 then send an opening of a random Hamiltonian tour to A.
The simulation is perfect…
2. A controls the prover (weak extraction):
S gets input z’ from Z, and runs A on input z’. Next:
I. Obtain from A all the “commit” messages to Fcom and record the
committed graph and permutation. Send (sid,P,V,G,h=0) to Fwzk.
II. If the bit b obtained from Fwzk is 1 (i.e., Fwzk is going to allow cheating)
then send the challenge c=0 to A.
If b=0 (i.e., no cheating allowed in this run) then send c=1 to A.
III. Obtain A’s opening of the commitments in step 3 of the protocol.
If c=0, all openings are obtained and are consistent with G, then send
b’=1 to Fwzk. If c=0 and some openings are bad or inconsistent with
G then send b’=0 (i.e., no cheating, and V should not accept.)
If c=1 then obtain A’s openings of the commitments to the Hamiltonian
cycle h’. If h’ is a Hamiltonian cycle then send h’ to Fwzk. Otherwise,
send h’=0 to Fwzk.
Analysis of S:
The simulation is perfect. That is, the joint view of the
simulated A together with Z is identical to their view in
an execution in the Fcom –hybrid model:
– V’s challenge c is uniformly distributed.
– If c=0 then V’s output is 1 iff A opened all commitments
and the permutation is consistent with G.
– If c=1 then V’s output is 1 iff A opened a real Hamiltonian
cycle in G.
3. A controls neither party or both parties: Straightforward.
From Fwzk^R to Fzk^R
A protocol for realizing Fzk^R in the Fwzk^R-hybrid model:
• P(x,w): Run k copies of Fwzk^R, in parallel. Send
(x,w) to each copy.
• V: Run k copies of Fwzk^R, in parallel. Receive (xi,bi)
from the i-th copy. Then:
– If all x’s are the same and all b’s are the same then
output (x,b).
– Else output nothing.
Analysis of the protocol
Let A be an adversary that interacts with the protocol in the Fwzk^R-hybrid
model. Need to construct an ideal-process adversary S that interacts
with Fzk^R and fools all environments. There are four cases:
1. A controls the verifier: In this case, all A sees is the value (x,b) coming
in k times, where (x,b) is the output value. This is easy to simulate:
S obtains (x,b) from TP, gives it to A k times, and outputs whatever A
outputs.
2. A controls the prover: Here, A should provide k inputs x1…xk to the k
copies of Fwzk^R, obtain k bits b1…bk from these copies of Fwzk^R, and
should give witnesses w1…wk in return. S runs A, obtains x1…xk,
gives it k random bits b1…bk, and obtains w1…wk. Then:
– If all the x’s are the same and all copies of Fwzk^R would accept, then
find a wi such that R(x,wi)=1, and give (x,wi) to Fzk^R. (If didn’t find
such wi then fail. But this will happen only if b1…bk are all 1, and
this occurs with probability 2^-k.)
– Else give (x,w’) to Fzk^R, where w’ is an invalid witness.
Analysis of S:
– When the verifier is corrupted, the views of Z from both
interactions are identically distributed.
– When the prover is corrupted, conditioned on the event
that S does not fail, the views of Z from both interactions
are identically distributed. Furthermore, S fails only if b1
…bk are all 1, and this occurs with probability 2^-k.
Note: The analysis is almost identical to the non-concurrent
case, except that here the composition is in parallel. | https://ocw.mit.edu/courses/6-897-selected-topics-in-cryptography-spring-2004/054e4a5552a39239b46a1e5a5c09cf13_lecture9_10.pdf |
Welcome
back
to 8.033!
Emmy Noether
1882-1935
Image courtesy of Wikipedia.
MIT Course 8.033, Fall 2006, Lecture 2
Max Tegmark
PRACTICAL STUFF:
• PS1 due Friday 4PM
• Symmetry notes posted
TODAY’S TOPIC: SYMMETRY IN PHYSICS
• Key concepts: frame, inertial frame, transformation, invariant,
invariance, symmetry, relativity
• Key people: Galileo Galilei, Emmy Noether
• Symmetry examples: translation, rotation, parity, boost
• Million Dollar question: what are the symmetries of physics?
What do we mean by symmetry?
WHAT’S THE
SYMMETRY OF
THE UNIVERSE?
OF PHYSICS?
Figures by MIT OCW.
WHAT’S THE
SYMMETRY
OF
CLASSICAL
MECHANICS?
Figures by MIT OCW.
SO WHICH DO YOU
TRUST MORE:
Classical Mechanics, or
E&M? | https://ocw.mit.edu/courses/8-033-relativity-fall-2006/055852a54ae99cb8b6a6386f84e71bef_lecture2_symme1.pdf |
ESD.86
Markov Processes and their
Application to Queueing
Richard C. Larson
March 5, 2007
Photo courtesy of Johnathan Boeke. http://www.flickr.com/photos/boeke/134030512/
Outline
• Spatial Poisson Processes, one more time
• Introduction to Queueing Systems
• Little’s Law
• Markov Processes
Spatial
Poisson
Processes
Courtesy of Andy Long. Used with permission.
http://zappa.nku.edu/~longa/geomed/modules/ss1/lec/poisson.gif
Spatial Poisson Processes
• Entities distributed in space (Examples?)
• Follow postulates of the (time) Poisson
process
– λdt = Probability of a Poisson event in dt
– History not relevant
– What happens in disjoint time intervals is
independent, one from the other
– The probability of two or more Poisson events in
dt is second order in dt and can be ignored
• Let’s fill in the spatial analogue…
Set S has area A(S).
Poisson intensity is γ entities/(unit area).
X(S) is a random variable:
X(S) = number of Poisson entities in S.

P{X(S) = k} = [(γA(S))^k / k!] e^(−γA(S)), k = 0,1,2,…
Nearest Neighbors: Euclidean
Define D2 = distance from a random point
to the nearest Poisson entity.
Want to derive f_D2(r). Happiness:

F_D2(r) ≡ P{D2 ≤ r} = 1 − P{D2 > r}
F_D2(r) = 1 − Prob{no Poisson entities in circle of radius r}
F_D2(r) = 1 − e^(−γπr²), r ≥ 0
f_D2(r) = (d/dr) F_D2(r) = 2πγr e^(−γπr²), r ≥ 0

Rayleigh pdf with parameter 2γπ
Nearest Neighbors: Euclidean
f_D2(r) = (d/dr) F_D2(r) = 2πγr e^(−γπr²), r ≥ 0
(Rayleigh pdf with parameter 2γπ)

E[D2] = (1/2)·(1/√γ)   “Square Root Law”
σ²_D2 = (2 − π/2)·(1/(2πγ))
Nearest Neighbor: Taxi Metric
F_D1(r) ≡ P{D1 ≤ r}
F_D1(r) = 1 − Pr{no Poisson entities in diamond of radius r}
F_D1(r) = 1 − e^(−2γr²)
f_D1(r) = (d/dr) F_D1(r) = 4γr e^(−2γr²)
How Might You Derive the PDF of the kth Nearest Neighbor?
Blackboard exercise!
To Queue or Not to Queue,
That May be a Question!
Queueing System
Arriving Customers
Service
Facility
Queue of Waiting Customers
Departing Customers
Figure by MIT OCW.
Servers:
Statistical Clones?
Finite or
Infinite?
Finite or
Infinite?
Queue
Discipline:
How queuers
Are selected
for service
Source: Larson and Odoni, Urban Operations Research
What Kinds of Queues Occur in
Systems of Interest to ESD?
ESD
Queues?
Photos courtesy, from top left, clockwise: U.S. FAA: Flickr user “*keng” http://www.flickr.com/photos/kengz/67187556/;
Luke Hoersten http://www.flickr.com/photos/lukehoersten/532375235/)
Little’s Law for Queues
Source: Larson and Odoni, Urban Operations Research
a(t) = cumulative # arrivals to system in (0,t]
d(t) = cumulative # departures from system in (0,t]
L(t) = a(t) − d(t)
L(t) = number of customers in the system
(in queue and in service) at time t
Little’s Law for Queues
γ(t) = total number of customer minutes spent in the system

γ(t) = ∫0^t [a(τ) − d(τ)] dτ = ∫0^t L(τ) dτ
Let’s Get an Expression for Each of 3 Quantities
λt ≡ average customer arrival rate = a(t)/t
Wt ≡ average time that an arrived customer has spent in the system
Wt = γ(t)/a(t)
Lt ≡ time-average # customers in system during (0,t]
Lt = (1/t) ∫0^t L(τ) dτ = γ(t)/t

Lt = γ(t)/t = [a(t)/t]·[γ(t)/a(t)] = λt·Wt

In the limit, L = λW (Little’s Law)
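The finite-t identity Lt = λt·Wt holds path by path, which a small discrete-event simulation makes concrete. The M/M/1 queue below is used purely as a convenient example; the identity itself uses none of its assumptions (rates and horizon are illustration values):

```python
# Check Lt = lambda_t * Wt on one sample path of a single-server FCFS queue.
import random
random.seed(3)

lam, mu = 1.0, 2.0              # arrival and service rates (illustrative)
T = 50000.0                     # long horizon
t, server_free_at = 0.0, 0.0
sojourns = []                   # time in system, per customer

while True:
    t += random.expovariate(lam)        # next arrival
    if t > T:
        break
    start = max(t, server_free_at)      # wait if server busy
    finish = start + random.expovariate(mu)
    server_free_at = finish
    sojourns.append(finish - t)

gamma_T = sum(sojourns)                 # total customer-time in system
L = gamma_T / T                         # time-average number in system
W = gamma_T / len(sojourns)             # average time in system
lam_hat = len(sojourns) / T             # observed arrival rate
print(abs(L - lam_hat * W) < 1e-9)      # the identity holds exactly
# (For M/M/1 with rho = 0.5, L should also come out near rho/(1-rho) = 1.)
```

Note that L and λ̂·W are built from the same sums, which is exactly the algebra on this slide; the probabilistic content of Little's Law is only that the three limits exist.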
Key Issues
L = λW
• L is a time-average. Explain.
• λ is the average arrival rate of customers
who actually enter the system.
• W is average time in system (in queue
and in service) for actual customers
who enter the system.
More Issues
L = λW
• Little’s Law is general. It does not
depend on
– Arrival process
– Service process
– # servers
– Queue discipline
– Renewal assumptions, etc.
• It just requires that the 3 limits exist.
Still More Issues
L = λW
• What about balking? Reneging? Finite
capacity?
• Do we need iid service times? iid inter-
arrival times?
• Do we need each busy period to behave
statistically identically?
• Look at the role of γ(t). Can change queue
statistics by changing queue discipline.
Cumulative # of Arrivals
FCFS = First Come, First Served
SJF = Shortest Job First
[Figure: cumulative arrival and departure curves vs. time t, showing L(t) under FCFS and LSJF(t) under SJF]
What about LJF, Longest Job 1st?
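The discipline's effect on L(t) can be shown directly: feed the same arrivals and service demands through FCFS and through non-preemptive SJF and compare the time-average number in system (rates and job count below are illustration values):

```python
# Same workload, two disciplines: SJF yields a smaller time-average L than FCFS.
import heapq, random
random.seed(4)

n, t, jobs = 2000, 0.0, []
for _ in range(n):
    t += random.expovariate(1.0)
    jobs.append((t, random.expovariate(1.5)))   # (arrival time, service demand)

def total_sojourn_fcfs(jobs):
    free, tot = 0.0, 0.0
    for a, s in jobs:
        free = max(a, free) + s
        tot += free - a
    return tot

def total_sojourn_sjf(jobs):
    # non-preemptive shortest-job-first, single server
    tot, free, i, heap = 0.0, 0.0, 0, []
    while i < len(jobs) or heap:
        if not heap and jobs[i][0] > free:
            free = jobs[i][0]                   # server idles until next arrival
        while i < len(jobs) and jobs[i][0] <= free:
            heapq.heappush(heap, (jobs[i][1], jobs[i][0]))  # key: service time
            i += 1
        s, a = heapq.heappop(heap)              # shortest waiting job first
        free += s
        tot += free - a
    return tot

T = jobs[-1][0]
L_fcfs = total_sojourn_fcfs(jobs) / T
L_sjf = total_sojourn_sjf(jobs) / T
print(L_sjf < L_fcfs)
```

Both runs do the same total work in the same busy periods; only γ(t), and hence L, changes with the serving order.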
“System” is General
L = λW
• Our results apply to the entire queue
system, queue plus service facility.
• But they could apply to the queue only!
Lq = λWq
• Or to the service facility only!
LSF = λWSF = λ/μ
1/μ = mean service time
All of this means,
“You buy one, you get the other 3 for free!”
W = 1/μ + Wq
L = Lq + LSF = Lq + λ/μ
L = λW
Utilization Factor ρ
• Single server. Set
Y = {1 if server is busy; 0 if server is idle}
E[Y] = 1·P{server is busy} + 0·P{server is idle}
E[Y] = 1·ρ + 0 = ρ = E[# customers in SF] = ?
• E[Y] is the time-average number of
customers in the SF.
• By Little’s Law, ρ = λ/μ < 1.
Utilization Factor ρ
• Similar logic for N identical parallel
servers gives
ρ = (λ/N)·(1/μ) = λ/(Nμ) < 1
• Here, λ/μ corresponds to the time-
average number of servers busy.
Markov Queues
Markov here means, “No Memory”
Source: Larson and Odoni, Urban Operations Research
Balance of Flow Equations
λ0·P0 = μ1·P1
(λn + μn)·Pn = λn−1·Pn−1 + μn+1·Pn+1, for n = 1,2,3,…
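For a birth-death chain these balance equations collapse to the recursion λ(n−1)·P(n−1) = μ(n)·P(n), which can be solved numerically and normalized. The sketch below uses constant M/M/1 rates, where the known answer is P(n) = (1 − ρ)ρ^n (rates and truncation level are illustration values):

```python
# Solve the balance-of-flow equations numerically for a birth-death chain.
lam, mu, N = 1.0, 2.0, 60       # constant rates (M/M/1); truncate at N states

p = [1.0]                       # unnormalized probabilities, p[0] = 1
for n in range(1, N):
    p.append(p[-1] * lam / mu)  # from lam*P(n-1) = mu*P(n)
total = sum(p)
p = [x / total for x in p]      # normalize so the P(n) sum to 1

rho = lam / mu
assert abs(p[0] - (1 - rho)) < 1e-12    # matches P0 = 1 - rho
assert abs(p[3] - (1 - rho) * rho**3) < 1e-12
print("P0 =", round(p[0], 6))
```

The same loop handles state-dependent λn and μn (e.g. μn = min(n, N_servers)·μ for multiple servers) by indexing the rates by n.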
To be continued………….. | https://ocw.mit.edu/courses/esd-86-models-data-and-inference-for-socio-technical-systems-spring-2007/0565f132e474fba29bbf61e88252fa78_lec8.pdf |
3.37 (Class 7)
Review
• Book on explosive welds
• Wire Bonding
o Diagram on board (squeeze wire-thermal compression weld)
o Perimeter Bonding
o Up to 200-400 I/O (approx 50 per side, can make double rows but lose
real-estate on semiconductor)
• TAB Bonding
o Diagram on board
o Solder connection (gold and tin-plated connection, use a heated platen of
tungsten or aluminum, perhaps molybdenum)
o Perimeter Bonding
o Up to 400-800 I/O (approx 100 per side)
• Controlled Collapse Chip Connection (C4)
o Diagram on board (internal connection of silicon chip to substrate with
small solder balls, tin or indium-tin, sometimes around lead which doesn’t
melt so that chip stays above substrate, about 4mil)
o Balls help to self align by surface tension
o Ball Grid Array, sometimes refers to the connection of package to circuit
board, distinction between C4 and BGA becomes blurry when chip gets
mounted directly to the board.
o Don’t get much bigger than 1cm per side on chip due to thermal expansion
o Invented in 1960’s by IBM, just come into its own
o 1000-2000 I/O
o Area Bonding (lose some real-estate, but distributed throughout)
• Size of chips and speed
o Speed of light c = 3x10^8 m/s
o Distance a signal can travel in one clock period: λ = c·t
o Say 3 GHz: t = 1/(3x10^9) s, so λ = 0.1 m = 10 cm
o Even at the speed of light, a signal barely gets off the chip and back in one cycle
o But signals actually travel at only about 10% of the speed of light (about 1 cm per cycle), which has to do with the reactance (mostly capacitance) of the chip.
o Alpha chip at 1GHz required special electromagnetic design
o One of the ways around this is to use clockless computers, supposedly
able to get a 2-3x increase in speed.
Today
Adhesive Bonding
• Unique among welding and joining processes
• The only one that just buries the contamination
• For copy machines, essentially bonding toner to paper
o Company X, book starts with “gluons”
o Similar to starting with diagram of binding energy between atoms
o But don’t need to start at this level in adhesive bonding
• Start with surface, then add contaminants
o Oxides (very quickly forms)
o CO2
o H20
• Need something that has a lower surface energy
o Problem bonding to Teflon
• Types
o Type I
(cid:131) Diagram on board
(cid:131) Two pieces of solid, interpose a liquid that “wets” the surface
(cid:131) Separation distance d, radius of curvature r
(cid:131) See formula on board
(cid:131) Soap bubbles obey this equation
(cid:131) Blowing up balloons, need more pressure to start the balloon, small
curvature
(cid:131) Adjust for spherical or cylindrical bubbles
(cid:131) Negative pressure in wetting liquid
(cid:131) Demonstration with Johansen blocks (used by machinists), very
precisely ground blocks, accurate to about 50millionths of an inch,
slide together and they adhere, use oils from hands as the liquid,
even chalk dust can interfere with this bond
o Type II, Mechanical Interlocking
▪ Two rough surfaces with a liquid that hardens yield a mechanical bond
▪ Say 90+% of bonds are just mechanical interlocking
▪ Teflon mechanically interlocked into a porous surface
▪ Demonstration with “Magic Sand”
Wetting
• Young’s equation
o Diagram on board
o Angle formed at solid-liquid-vapor interface
o Used when get to solders
o | https://ocw.mit.edu/courses/3-37-welding-and-joining-processes-fall-2002/058777e333c43dcdd2ab4faec95745b6_33707.pdf |
Young’s equation
o Diagram on board
o Angle formed at solid-liquid-vapor interface
o Used when get to solders
o One of the simplest and most misunderstood equations
o An equation at equilibrium
o Water on waxed car has high theta
o Adding wetting agents in automatic dishwashers to cause “sheeting
action”
o Mercury (toxic), can use gallium (not as toxic as mercury, melts in your
hand) | https://ocw.mit.edu/courses/3-37-welding-and-joining-processes-fall-2002/058777e333c43dcdd2ab4faec95745b6_33707.pdf |
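Young's equation at the solid-liquid-vapor interface, γ_sv = γ_sl + γ_lv cos θ, can be rearranged for the contact angle. A small sketch (the surface-energy values in the example below are illustrative, not from the lecture):

```python
import math

def contact_angle_deg(gamma_sv, gamma_sl, gamma_lv):
    """Contact angle from Young's equation at equilibrium:
    gamma_sv = gamma_sl + gamma_lv * cos(theta)."""
    c = (gamma_sv - gamma_sl) / gamma_lv
    if c >= 1.0:
        return 0.0       # complete wetting (the "sheeting action")
    if c <= -1.0:
        return 180.0     # complete non-wetting
    return math.degrees(math.acos(c))

# High-energy solid surface (e.g. a clean metal): small theta, good wetting.
# Low-energy surface (e.g. wax or Teflon): large theta, poor wetting,
# like water beading on a waxed car.
```

The equation only holds at equilibrium, which is exactly the point the lecture flags as "simplest and most misunderstood."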
MIT 3.016 Fall 2005, © W.C. Carter, Lecture 5
Sept. 16 2005: Lecture 5:
Introduction to Mathematica IV
Graphics
Graphics are an important part of exploring mathematics and conveying its results. An informative plot or graphic that conveys a complex idea succinctly and naturally to an educated observer is a work of creative art. Indeed, art is sometimes defined as “an elevated means of communication,” or “the means to inspire an observation, heretofore unnoticed, in another.” Graphics are art; they are necessary. And, I think they are fun.
For graphics, we are limited to two and three dimensions, but, with the added possibility of animation, sound, and perhaps other sensory input in advanced environments, it is possible to usefully visualize more than three dimensions. Mathematics is not limited to a small number of dimensions; so, a challenge (or perhaps an opportunity) exists to use artfulness to convey higher-dimensional ideas graphically.
Basic graphics starts with two-dimensional plots.
Mathematica® Example: Lecture05
Two-dimensional Plots
2D plots, plot options, log plots
Plotting Data
Sometimes you will want to plot numbers that come from elsewhere, otherwise known as data. Presumably, data will be imported with file I/O. It is useful to plot data within Mathematica® so you can compare it to model equations or fit it to an empirical equation.
Three-dimensional graphics are typically projected onto the screen. This means that you need to specify the direction in space from which you will look at the two-dimensional projection. You get some depth information in a projection from the perspective (i.e., the trick that artists use of making parallel lines converge at a non-infinite point, e.g., the 15th-century Italian School, Donatello). You also get information by changing your viewpoint. In Mathematica® you need to specify a ViewPoint that orients the viewer from a certain direction and sets the perspective. At a close viewpoint (i.e., the magnitude of the ViewPoint vector is small), parallel lines converge quickly and perspective, as well as distortion, is enhanced. For more distant ViewPoints, an object projects more “flatly” (as in Art Naïf) and with less distortion.
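The effect of the viewpoint distance can be sketched numerically: project a point seen from a viewer at (0, 0, d) onto the z = 0 plane, and compare how strongly a line parallel to the viewing axis converges for near versus far viewpoints. This is a generic pinhole-projection sketch, not Mathematica's exact ViewPoint convention:

```python
def project(point, d):
    """Perspective-project a 3D point onto the z = 0 plane,
    as seen by a viewer at (0, 0, d). Assumes z < d."""
    x, y, z = point
    s = d / (d - z)             # points farther from the viewer shrink
    return (x * s, y * s)

def convergence(d, x0=1.0, z_far=-10.0):
    """Apparent narrowing, between z = 0 and z = z_far, of a line
    parallel to the z-axis at height x0."""
    x_near, _ = project((x0, 0.0, 0.0), d)
    x_far, _ = project((x0, 0.0, z_far), d)
    return x_near - x_far

# A close viewpoint (small d) converges parallel lines much faster
# than a distant one, which projects almost "flatly".
```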
Mathematica® Example: Lecture05
Three-Dimensional Graphics
Plotting three-dimensional graphics
Mathematica® has a “graphics engine” that allows you to add additional graphics to your plot. Although it is not efficient, one could use Mathematica® as a drawing program like Pourripinte or similar. Mathematica® has a number of graphics primitives that can be drawn; it is only a question of asking Mathematica® to draw a primitive where you want it.
Mathematica® Example: Lecture05
Graphics Primitives
Examples: Circles, Text, Random Walk, Wulff Construction
Because PostScript is one of the graphics primitives, you can draw anything that can be imaged in another application. You can also import your own drawings and images into Mathematica®.
Electricity and Magnetism
• More on
– Electric Flux
– Gauss’ Law
Feb 20 2002
More on Electric Flux and
Gauss’ Law
Maxwell Equations
(1873)
Electric Flux
Electric Flux ΦE = E A
ΦE is a scalar: how much field passes through surface A?
The area vector A:
• Direction: normal to the surface
• Magnitude: the surface area
• For a closed surface: pointing outwards
Electric Flux
• What if E not constant on surface A?
• Use integral
• Often, ‘closed’ surfaces
Gauss’ Law
• Connects Flux through closed surface and
charge inside this surface:
∮ E·dA = Qencl/ε0  (Note: k = 1/(4πε0))
Gauss’ Law
• True for ANY closed surface around Qencl
• Suitable choice of surface A can make integral
very simple
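For the point charge with a concentric spherical surface, the "suitable choice" makes the integral trivial: E is constant on the sphere, so the flux is E(r) times 4πr², and the r dependence cancels. A quick numerical check in SI units (my own helper, not from the slides):

```python
import math

EPS0 = 8.854187817e-12                  # vacuum permittivity, F/m
K = 1.0 / (4.0 * math.pi * EPS0)        # Coulomb constant k = 1/(4 pi eps0)

def flux_through_sphere(q, r):
    """Flux of a point charge q through a concentric sphere of radius r:
    E(r) * area = (k q / r**2) * (4 pi r**2)."""
    E = K * q / r**2
    return E * 4.0 * math.pi * r**2

# The result is q/EPS0 regardless of r: Gauss's law holds for ANY
# closed surface around the enclosed charge.
```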
Use the Symmetry!
[Figure: a point charge +Q enclosed by a spherical Gaussian surface with area element dA.]
Use the Symmetry!
[Figure: a uniformly charged sphere of radius r0, with concentric spherical Gaussian surfaces A1 (inside) and A2 (outside).]
Use the Symmetry!
[Figure: a uniformly charged line enclosed by a cylindrical Gaussian surface.]
Hollow conducting Sphere
[Figure: a hollow conducting sphere with positive charge distributed over its surface.]
Last example
[Figure: two parallel rows of + charges, a uniformly charged sheet.]
Faraday Cage
[Figure: a hollow metal sphere near a high-voltage (HV) source; induced negative charge faces the source while positive charge sits on the far side.]
Van de Graaff Generator
Figure by MIT OCW.
[Figure: charge accumulating on the generator dome; large E near the surface, E ~ 1/r².]
‘Challenge’ In-Class Demo
Lecture 2
8.251 Spring 2007
Lecture 2 - Topics
• Energy and momentum
• Compact dimensions, orbifolds
• Quantum mechanics and the square well
Reading: Zwiebach, Sections: 2.4 - 2.9
x± = (1/√2)(x0 ± x1);  x+ is the light-cone (l.c.) time. Leave x2 and x3 untouched.
−ds² = −(dx0)² + (dx1)² + (dx2)² + (dx3)² = ημν dxμ dxν,  μ, ν = 0, 1, 2, 3
2 dx+ dx− = (dx0 + dx1)(dx0 − dx1) = (dx0)² − (dx1)²
−ds² = −2 dx+ dx− + (dx2)² + (dx3)² = η̂μν dxμ dxν,  μ, ν = +, −, 2, 3
⎡  0 −1  0  0 ⎤
⎢ −1  0  0  0 ⎥  = η̂μν
⎢  0  0  1  0 ⎥
⎣  0  0  0  1 ⎦
η̂++ = η̂−− = η̂+I = η̂−I = 0,  I = 2, 3
η̂+− = η̂−+ = −1,  η̂22 = η̂33 = 1
Given vector aµ, transform to:
a± := (1/√2)(a0 ± a1)
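A quick numerical check (the numbers are arbitrary) that the light-cone components reproduce the same Minkowski dot product as the standard components:

```python
import math

def lc(a):
    """Light-cone components (a+, a-, a2, a3) of a 4-vector (a0, a1, a2, a3)."""
    a0, a1, a2, a3 = a
    s = math.sqrt(2.0)
    return ((a0 + a1) / s, (a0 - a1) / s, a2, a3)

def dot_std(a, b):
    """eta_{mu nu} a^mu b^nu with the standard mostly-plus metric."""
    return -a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3]

def dot_lc(a, b):
    """Same dot product via hat-eta: -a+ b- - a- b+ + a2 b2 + a3 b3."""
    ap, am, a2, a3 = lc(a)
    bp, bm, b2, b3 = lc(b)
    return -ap * bm - am * bp + a2 * b2 + a3 * b3
```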
Einstein's equations in 3 space-time dimensions are great, but 2 space dimensions are not enough for life. Luckily, the theory also works in higher dimensions (d = 5, 6, ...). Why don't we live with 4 space dimensions? If we lived with 4 space dimensions, planetary orbits wouldn't be stable (which would be a problem!).
Maybe there's an extra dimension where we can unify gravity and ...
Maybe if so, then the extra dimensions would have to be very small, too small to see.
String theory has extra dimensions and makes the theory work. Though caution: this is a pretty big leap.
Trees in a Box
Look at trees in a box
Move a little and see another behind it
In fact, you see an infinite row of trees that are all identical! Leaves fall identically and everything.
Dot Product
a·b = −a0b0 + Σᵢ₌₁³ aᵢbᵢ = −a+b− − a−b+ + a2b2 + a3b3 = η̂μν aμ bν
Lowering indices: aμ = η̂μν aν, so
a+ = η̂+ν aν = η̂+− a− = −a−
a+ = −a−,  a− = −a+
Light rays are a bit like in Galilean physics: the light-cone velocity goes from 0 to ∞,
vlc = dx−/dx+
Energy and Momentum
Event 1 at xμ; Event 2 at xμ + dxμ (after some positive time change). dxμ is a Lorentz vector.
Back to the trees: the dimension along the room is actually a circle with one tree, so the row is not actually infinite. You see light rays that go around the circle multiple times, so you see multiple trees. A crazy way to define a circle! This circle is a topological circle: no “center”, no “radius”.
Identify two points, P1 and P2. Say they are the same (P1 ≈ P2) if and only if x(P1) =
x(P2) + (2πR)n (n ∈ Z)
Write as:
x ≈ x + (2πR)n
Define: Fundamental Domain = a region s.t.:
1. No two points in it are identified
2. Every point in the full space is either in the fundamental domain or has a
representation in the fundamental domain.
So on our x line, we would have:
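For the identification x ≈ x + 2πRn, a fundamental domain can be taken as [0, 2πR); every point of the line has exactly one representative there. A small sketch (helper names are mine):

```python
import math

def representative(x, R):
    """Map x to its representative in the fundamental domain [0, 2*pi*R)
    under the identification x ~ x + 2*pi*R*n."""
    L = 2.0 * math.pi * R
    return x % L

def identified(x1, x2, R, tol=1e-9):
    """True iff x1 ~ x2, i.e. x1 - x2 is an integer multiple of 2*pi*R."""
    L = 2.0 * math.pi * R
    m = (x1 - x2) / L
    return abs(m - round(m)) < tol
```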
−ds² = −c²dt² + (dx⃗)² = −c²dt² + v²(dt)² = −c²(1 − β²)(dt)²
ds² is a positive value, so we can take the square root:
ds = c√(1 − β²) dt
In the co-moving Lorentz frame, do the same computation and find:
−ds² = −c²(dtp)² + (dx⃗)² = −c²(dtp)²
dtp: the proper time moving with the particle. Also greater than 0.
ds = c dtp
Define the velocity four-vector (dxμ/ds is a Lorentz vector):
uμ = c dxμ/ds = dxμ/dtp
Define the momentum four-vector:
pμ = m uμ = (m/√(1 − β²)) dxμ/dt = mγ dxμ/dt,  γ = 1/√(1 − β²)
Rule to get the space we're trying to construct: take the FD, include its boundary, and apply the identification.
Note: It is easy to get mixed up if the rule is not followed carefully.
Consider ℝ² with 2 identifications:
(x, y) ≈ (x + L1, y)
(x, y) ≈ (x, y + L2)
Blue: Fundamental domain for first identification
Red: Fundamental domain for second identification
pμ = mγ (dx0/dt, dx⃗/dt) = (mcγ, mγv⃗) = (E/c, p⃗)
E: relativistic energy = mc²/√(1 − β²)
p⃗: relativistic momentum
Scalar:
p·p = pμ pμ = −(p0)² + (p⃗)² = −E²/c² + p⃗²
    = −m²c²/(1 − β²) + m²v²/(1 − β²)
    = −m²c²(1 − β²)/(1 − β²)
    = −m²c²
Every observer agrees on this value.
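The frame-independence of p·p can be checked numerically with a standard 1+1-dimensional Lorentz boost (the numbers below are arbitrary; units with c = 1):

```python
import math

def four_momentum(m, v, c=1.0):
    """(p0, p1) = (E/c, p) for a particle of mass m and velocity v
    (one space dimension shown)."""
    beta = v / c
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return (gamma * m * c, gamma * m * v)

def boost(p, u, c=1.0):
    """Lorentz-boost (p0, p1) into a frame moving with velocity u."""
    beta = u / c
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    p0, p1 = p
    return (gamma * (p0 - beta * p1), gamma * (p1 - beta * p0))

def invariant(p):
    """-(p0)^2 + (p1)^2, which should equal -m^2 c^2 in every frame."""
    return -p[0]**2 + p[1]**2
```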
Light Cone Energy
x0 = time, E/c = p0
x+ = l.c. time, Elc/c = p+? → Nope!
Justify using QM: Ψ(t, x⃗) = e^{−(i/ℏ)(Et − p⃗·x⃗)}
Can think of the IDs as transformations - points “move.” Here’s something that
“moves” some points but not all.
Orbifolds
1. ID: x ≈ −x
FD: the half-line x ≥ 0
Think of the ID as the transformation x → −x.
This FD is not a normal 1D manifold, since the origin is a fixed point. Call this half-line ℝ/Z2 the quotient.
2. ID: x ≈ x rotated about the origin by 2π/n
In polar (complex) coordinates: z = x + iy, with the identification z ≈ e^{2πi/n} z.
The fundamental domain can be chosen to be a wedge of opening angle 2π/n. Gluing its edges gives a cone!
We focus on these two cases since they are quite solvable in string theory.
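The cone identification z ≈ e^{2πi/n} z can be sketched by mapping every z to a representative in the wedge 0 ≤ arg z < 2π/n (my own helper, not from the notes):

```python
import cmath
import math

def wedge_representative(z, n):
    """Map z in C to its representative in the wedge 0 <= arg(z) < 2*pi/n
    under the identification z ~ exp(2*pi*i/n) * z."""
    if z == 0:
        return 0j                            # the fixed point (tip of the cone)
    r = abs(z)
    phi = cmath.phase(z) % (2.0 * math.pi)   # angle in [0, 2*pi)
    phi %= (2.0 * math.pi / n)               # fold into the wedge
    return cmath.rect(r, phi)
```

Rotating z by 2π/n does not change its representative, which is exactly the orbifold identification.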
SE (Schrödinger equation): p̂⃗ = (ℏ/i)∇
iℏ ∂Ψ/∂x0 = (E/c)Ψ,  i.e.  iℏ ∂Ψ/∂t = EΨ
So for our x+, we want iℏ ∂Ψ/∂x+ = (Elc/c)Ψ.
Et − p⃗·x⃗ = −(−(E/c)(ct) + p⃗·x⃗) = −p·x = −(p+x+ + p−x− + ...)
Now we have isolated the dependence on x+, so we can take the derivative:
Ψ = e^{(i/ℏ)(p+x+ + ...)}
iℏ ∂Ψ/∂x+ = −p+Ψ
So Elc/c = −p+ = p−.
Suppose we have a line segment of length a, with a particle constrained to it:
Compare to the physics of a world with the particle constrained to a thin cylinder of radius R and length a (2D). This can be defined as the strip with the identification (x, y) ≈ (x, y + 2πR).
SE: −(ℏ²/2m)(∂²/∂x² + ∂²/∂y²)Ψ = EΨ
1. Line segment:
Ψk = √(2/a) sin(kπx/a),  Ek = (ℏ²/2m)(kπ/a)²
2. Cylinder:
Ψk,l = √(2/a) sin(kπx/a) cos(ly/R)
Ψ′k,l = √(2/a) sin(kπx/a) sin(ly/R)
States with l = 0 are the same as in case 1, but states with l ≠ 0 get a different E value from the extra (ℏ²/2m)(l/R)² contribution. Only noticeable at very high temperatures.
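The energy comparison can be made concrete in natural units (ℏ = m = 1; function names are mine): the cylinder reproduces the segment spectrum at l = 0, and for small R the l ≠ 0 states are pushed far above the low-lying segment levels, which is why a tiny compact dimension is invisible at low energies.

```python
import math

HBAR = 1.0
M = 1.0   # natural units for illustration

def energy_segment(k, a):
    """Particle on a segment of length a: E_k = (hbar^2/2m)(k*pi/a)^2."""
    return (HBAR**2 / (2.0 * M)) * (k * math.pi / a) ** 2

def energy_cylinder(k, l, a, R):
    """Particle on the cylinder (segment x circle of radius R): the
    circle direction adds (hbar^2/2m)(l/R)^2 to the energy."""
    return energy_segment(k, a) + (HBAR**2 / (2.0 * M)) * (l / R) ** 2
```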
6.852: Distributed Algorithms
Fall, 2009
Class 6
Today’s plan
• (f+1)-round lower bound for stopping agreement, cont'd.
• Various other kinds of consensus problems in synchronous
networks:
– k-agreement
– Approximate agreement (skip)
– Distributed commit
• Reading:
– [Aguilera, Toueg]
– [Keidar, Rajsbaum]
– Chapter 7 (skip 7.2)
• Next:
– Modeling asynchronous systems
– Chapter 8
Lower Bound on Rounds
• Theorem 1: Suppose n ≥ f + 2. There is no n-process f-
fault stopping agreement algorithm in which nonfaulty
processes always decide at the end of round f.
• Old proof: Suppose A exists.
– Construct a chain of executions, each with at most f failures, where:
• First has decision value 0, last has decision value 1.
• Any two consecutive executions are indistinguishable to some process i
that is nonfaulty in both.
– So decisions in first and last executions are the same, contradiction.
– Must fail f processes in some executions in the chain, in order to
remove all the required messages, at all rounds.
– Construction in book, LTTR.
• Newer proof [Aguilera, Toueg]:
– Uses ideas from [Fischer, Lynch, Paterson], impossibility of
asynchronous consensus.
[Aguilera, Toueg] proof
• By contradiction. Assume A solves stopping agreement
for f failures and everyone decides after exactly f rounds.
• Consider only executions in which at most one process
fails during each round.
• Recall failure at a round allows process to miss sending
any subset of the messages, or to send all but halt
before changing state.
• Regard vector of initial values as a 0-round execution.
• Defs (adapted from [FLP]): α, an execution that
completes some finite number (possibly 0) of rounds, is:
– 0-valent, if 0 is the only decision that can occur in any execution
(of the kind we consider) that extends α.
– 1-valent, if 1 is…
– Univalent, if α is either 0-valent or 1-valent (essentially decided).
– Bivalent, if both decisions occur in some extensions (undecided).
Univalence and Bivalence
[Figure: three execution trees rooted at α: one where every extension decides 0 (α is 0-valent), one where every extension decides 1 (α is 1-valent), and one in which both decisions occur (α is bivalent). In the first two cases α is univalent.]
Initial bivalence
• Lemma 1: There is some 0-round execution
(vector of initial values) that is bivalent.
• Proof (from [FLP]):
– Assume for contradiction that all 0-round executions
are univalent.
– 000…0 is 0-valent.
– 111…1 is 1-valent.
– So there must be two 0-round executions that differ in
the value of just one process, i, such that one is 0-
valent and the other is 1-valent.
– But this is impossible, because if i fails at the start, no
one else can distinguish the two 0-round executions.
Bivalence through f-1 rounds
• Lemma 2: For every k, 0 ≤ k ≤ f-1, there is a bivalent k-
round execution.
• Proof: By induction on k.
– Base: Lemma 1.
– Inductive step: Assume for k, show for k+1, where k < f -1.
• Assume bivalent k-round execution α.
• Assume for contradiction that every 1-round
extension of α (with at most one new failure)
is univalent.
• Let α* be the 1-round extension of α in
which no new failures occur in round k+1.
• By assumption, α * is univalent, WLOG 1-
valent.
[Figure: bivalent α with 1-round extensions α* (1-valent) and α0 (0-valent) at round k+1.]
• Since α is bivalent, there must be another 1-round extension of α, α0, that is 0-valent.
Bivalence through f-1 rounds
•
In α0, some single process, say i, fails in
round k+1, by not sending to some set of
processes, say J = {j1, j2,…jm}.
• Define a chain of (k+1)-round executions,
α0, α1, α2,…, αm.
• Each αl in this sequence is the same as α0
except that i also sends messages to j1,
j2,…jl.
– Adding in messages from i, one at a time.
• Each αl is univalent, by assumption.
• Since α0 is 0-valent, either:
– At least one of these is 1-valent, or
– All are 0-valent.
Case 1: At least one αl is 1-valent
• Then there must be some l such that αl-1 is 0-
valent and αl is 1-valent.
• But αl-1 and αl differ after round k+1 only in the
state of one process, jl.
• We can extend both αl-1 and αl by simply failing jl
at beginning of round k+2.
– There is actually a round k+2 because we’ve
assumed k < f-1, so k+2 ≤ f.
• And no one left alive can tell the difference!
• Contradiction for Case 1.
Case 2: Every αl is 0-valent
• Then compare:
– αm, in which i sends all its round k+1 messages and
then fails, with
– α* , in which i sends all its round k+1 messages and
does not fail.
• No other differences, since only i fails at round k+1
in αm.
• αm is 0-valent and α* is 1-valent.
• Extend to full f-round executions:
– αm, by allowing no further failures,
– α*, by failing i right after round k+1 and then allowing no
further failures.
• No one can tell the difference.
• Contradiction for Case 2.
Bivalence through f-1 rounds
• So we’ve proved, so far:
• Lemma 2: For every k, 0 ≤ k ≤ f-1, there is
a bivalent k-round execution.
Disagreement after f rounds
• Lemma 3: There is an f-round execution in which two
nonfaulty processes decide differently.
• Proof:
– Use Lemma 2 to get a bivalent (f-1)-round execution α with ≤ f-1 failures.
– In every 1-round extension of α, everyone who hasn't failed must decide (and agree).
– Let α* be the 1-round extension of α in which no new failures occur in round f.
– Everyone who is still alive decides after α*, and they must decide the same thing. WLOG, say they decide 1.
– Since α is bivalent, there must be another 1-round extension of α, say α0, in which some nonfaulty process (and so, all nonfaulty processes) decide 0.
[Figure: α extended in round f to α* (decide 1) and α0 (decide 0).]
Disagreement after f rounds
• In α0, some single process i fails in round f.
• Let j, k be two nonfaulty processes.
• Define a chain of three f-round executions, α0, α1, α*, where α1 is identical to α0 except that i sends to j in α1 (it might not in α0).
• Then α1 ~k α0. Since k decides 0 in α0, k also decides 0 in α1.
• Also, α1 ~j α*. Since j decides 1 in α*, j also decides 1 in α1.
• Yields disagreement in α1, contradiction!
• So we’ve proved:
• Lemma 3: There is an f-round execution in which two nonfaulty
processes decide differently.
• Which immediately yields the lower bound result.
Early-stopping agreement algorithms
• Tolerate f failures in general, but in executions with f′ < f failures, terminate faster.
• [Dolev, Reischuk, Strong 90] Stopping agreement algorithm in which all nonfaulty processes terminate in ≤ min(f′ + 2, f+1) rounds.
– If f′ + 2 ≤ f, decide “early”, within f′ + 2 rounds; in any case decide
within f+1 rounds.
[Keidar, Rajsbaum 02] Lower bound of f′ + 2 for early-
stopping agreement.
– Not just f′ + 1. Early stopping requires an extra round.
• Theorem 2: Assume 0 ≤ f′ ≤ f – 2 and f < n. Every early-
stopping agreement algorithm tolerating f failures has an
execution with f′ failures in which some nonfaulty process
doesn’t decide by the end of round f′ + 1.
Just consider special case: f′ = 0
• Theorem 3: Assume 2 ≤ f < n. Every early-stopping
agreement algorithm tolerating f failures has a failure-free
execution in which some nonfaulty process does not decide
by the end of round 1.
• Definition: Let α be an execution that completes some
finite number (possibly 0) of rounds. Then val(α) is the
unique decision value in the extension of α with no new
failures.
• Proof of Theorem 3:
– Assume executions in which at most one process fails per round.
– Identify 0-round executions with vectors of initial values.
– Assume, for contradiction, that everyone decides by round 1, in all
failure-free executions.
– val(000…0) = 0, val(111…1) = 1.
– So there must be two 0-round executions α0 and α1, that differ in the
value of just one process i, such that val(α0) = 0 and val(α1) = 1.
Special case: f′ = 0
• 0-round executions α0 and α1, differing only in the initial value of
process i, such that val(α0) = 0 and val(α1) = 1.
• In failure-free extensions of α0, α1, all processes decide in one round.
• Define:
– β0, 1-round extension of α0, in which process i fails, sends only to j.
– β1, 1-round extension of α1, in which process i fails, sends only to j.
• Then:
– β0 looks to j like ff extension of α0, so j decides 0 in β0 after 1 round.
– β1 looks to j like ff extension of α1, so j decides 1 in β1 after 1 round.
• β0 and β1 are indistinguishable to all processes except i, j.
• Define:
– γ 0, infinite extension of β0, in which process j fails right after round 1.
– γ 1, infinite extension of β1, in which process j fails right after round 1.
• By agreement, all nonfaulty processes must decide 0 in γ 0, 1 in γ 1.
• But γ 0 and γ 1 are indistinguishable to all nonfaulty processes, so they
can’t decide differently, contradiction.
k-Agreement
k-agreement
• Usually called k-set agreement or k-set
consensus.
• Generalizes ordinary stopping agreement by
allowing k different decisions instead of just one.
• Motivation:
– Practical:
• Allocating shared resources, e.g., agreeing on small number
of radio frequencies to use for sending/receiving broadcasts.
– Mathematical:
• Natural generalization of ordinary 1-agreement.
• Elegant theory: Nice topological structure, tight bounds.
The k-agreement problem
• Assume:
– n-node complete undirected graph
– Stopping failures only
– Inputs, decisions in finite totally-ordered set V (appear
in state variables).
• Correctness conditions:
– Agreement:
• ∃ W ⊆ V, |W| = k, all decision values in W.
• That is, there are at most k different decision values.
– Validity:
• Any decision value is some process’ initial value.
• Like strong validity for 1-agreement.
– Termination:
• All nonfaulty processes eventually decide.
FloodMin k-agreement algorithm
• Algorithm:
– Each process remembers the min value it has seen,
initially its own value.
– At each round, broadcasts its min value.
– Decide after some generally-agreed-upon number of
rounds, on current min value.
• Q: How many rounds are enough?
• 1-agreement: f+1 rounds
– Argument like those for previous stopping agreement
algorithms.
• k-agreement: ⎣f/k⎦ + 1 rounds.
• Allowing k values divides the runtime by k.
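FloodMin is simple enough to simulate directly. A sketch (the failure-schedule encoding is my own): each process broadcasts its current min each round; a process that fails in a round reaches only the listed receivers and then halts.

```python
def floodmin(inputs, f, k, failures):
    """Simulate FloodMin for floor(f/k) + 1 rounds.

    inputs: list of initial values (index = process id).
    failures: dict round -> {proc: set of receivers it still reaches};
              a failed process sends only to those receivers, then halts.
    Returns the decisions of the processes that never failed.
    """
    n = len(inputs)
    mins = list(inputs)
    failed = set()
    rounds = f // k + 1
    for r in range(1, rounds + 1):
        round_fail = failures.get(r, {})
        received = [[mins[i]] for i in range(n)]   # everyone keeps its own min
        for i in range(n):
            if i in failed:
                continue                           # halted in an earlier round
            if i in round_fail:
                for j in round_fail[i]:            # partial broadcast, then halt
                    received[j].append(mins[i])
                failed.add(i)
            else:
                for j in range(n):                 # full broadcast
                    received[j].append(mins[i])
        mins = [min(received[i]) for i in range(n)]
    return [mins[i] for i in range(n) if i not in failed]
```

With at most k failures per round tolerated per the proof below, the surviving processes end with at most k distinct min values.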
FloodMin correctness
• Theorem 1: FloodMin, for ⎣f/k⎦ + 1 rounds, solves k-
agreement.
• Proof:
• Define M(r) = set of min values of active (not-yet-failed)
processes after r rounds.
• This set can only decrease over time:
• Lemma 1: M(r+1) ⊆ M(r) for every r, 0 ≤ r ≤ ⎣f/k⎦.
• Proof: Any min value after r+1 is someone’s min value
after r.
Proof of Theorem 1, cont’d
• Lemma 2: If at most d-1 processes fail during round r,
then |M(r)| ≤ d.
• E.g., for d = 1: If no one fails during round r then all have
the same min value after r.
• Proof: Show contrapositive.
– Suppose that |M(r)| > d, show at least d processes fail in round r.
– Let m = max (M(r)).
– Let m′ < m be any other element of M(r).
– Then m′ ∈ M(r-1) by Lemma 1.
– Let i be a process active after r-1 rounds that has m′ as its min
value after r-1 rounds.
– Claim i fails in round r:
• If not, everyone would receive m′ in round r.
• That means no one would choose m > m′ as its min, contradiction.
– But this is true for every m′ < m in M(r), so at least d processes
fail in round r.
Proof of Theorem 1, cont’d
• Validity: Easy
• Termination: Obvious
• Agreement: By contradiction.
– Assume an execution with > k different decision values.
– Then the number of min values for active processes after the full
⎣f/k⎦ + 1 rounds is > k.
– That is, |M(⎣f/k⎦ + 1)| > k.
– Then by Lemma 1, |M(r)| > k for every r, 0 ≤ r ≤ ⎣f/k⎦+1.
– So by Lemma 2, at least k processes fail in each round.
– That’s at least (⎣f/k⎦+1) k total failures, which is > f failures.
– Contradiction!
Lower Bound (sketch)
• Theorem 2: Any algorithm for k-agreement requires ≥ ⎣f/k⎦ + 1 rounds.
• Recall old proof for f+1-round lower bound for 1-agreement.
– Chain of executions for assumed algorithm:
α0 ----- α1 ----- …-----αj -----αj+1 ----- …-----αm
– Each execution has a unique decision value.
– Executions at ends of chain have specified decision values.
– Two consecutive executions look the same to some nonfaulty process,
who (therefore) decides the same in both.
• Argument doesn’t extend immediately to k-agreement:
– Can’t assume a unique value in each execution.
– Example: For 2-agreement, could have 3 different values in 2
consecutive executions without violating agreement.
•
Instead, use a k-dimensional generalized chain.
Lower bound
• Assume, for contradiction:
– n-process k-agreement algorithm tolerating f failures.
– All processes decide just after round r, where r ≤ ⎣f/k⎦.
– All-to-all communication at all rounds.
– n ≥ f + k + 1 (so each execution we consider has at least k+1
nonfaulty processes)
– V = {0,1,…,k}, k+1 values.
• Get contradiction by proving
existence of an execution with ≥ k + 1
different decision values.
• Use k-dimensional collection of
executions rather than 1-dimensional.
– k = 2: Triangle
– k = 3: Tetrahedron, etc.
Labeling nodes with executions
[Figure: the “Bermuda Triangle” for k = 2: a large triangle of initial-value assignments, with corners all 0s, all 1s, and all 2s, and boundary edges of 0s and 1s, 0s and 2s, and 1s and 2s.]
• Bermuda Triangle (k = 2): Any algorithm vanishes somewhere in the interior.
• Label nodes with executions:
– Corner: No failures, all have the same initial value.
– Boundary edge: Initial values chosen from those of the two endpoints.
– For k > 2, generalize to boundary faces.
– Interior: Mixture of inputs.
• Label so executions “morph gradually” in all directions.
• Difference between two adjacent executions along an outer edge:
– Remove or add one message, to a process that fails immediately.
– Fail or recover a process.
– Change the initial value of a failed process.
Labeling nodes with
process names
• Also label each node with the name of a process that is nonfaulty in
the node’s execution.
• Consistency: For every tiny triangle (simplex) T, there is a single
execution β, with at most f faults, that is “compatible” with the
executions and processes labeling the corners of T:
– All the corner processes are nonfaulty in β.
– If (α′,i) labels some corner of T, then α′ is indistinguishable by i from β.
• Formalizes the “gradual morphing” property.
• Proof by laborious construction.
• Can recast chain arguments for 1-agreement in this style:
β
α0 ----- α1 ----- …----- αj ----- αj+1 ----- …----- αm
pj+1 | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/0628b22125dc0598ad8b1f27422a4f2c_MIT6_852JF09_lec06.pdf |
reement in this style:
β
α0 ----- α1 ----- …----- αj ----- αj+1 ----- …----- αm
pj+1 pm
p0
p1 …. pj
– β indistinguishable by pj from αj
– β indistinguishable by pj+1 from αj+1
Bound on rounds
• This labeling construction uses the assumption r
≤ ⎣f / k⎦, that is, f ≥ r k.
• How:
– We are essentially constructing chains simultaneously
in k directions (2 directions, in the Bermuda Triangle).
– We need r failures (one per round) to construct the
“chain” in each direction.
– For k directions, that’s r k total failures.
• Details LTTR (see book, or paper [Chaudhuri,
Herlihy, Lynch, Tuttle])
Coloring the nodes
• Now color each node v with a
“color” in {0,1,…,k}:
– If v is labeled with (α,i) then
color(v) = i’s decision value in α.
• Properties:
– Colors of the major corners are
all different.
– Color of each boundary edge
node is the same as one of the
endpoint corners.
– For k > 2, generalize to
boundary faces.
• Coloring properties follow from
Validity, because of the way the
initial values are assigned.
Sperner Colorings
• A coloring with the listed
properties (suitably
generalized to k dimensions)
is called a “Sperner Coloring”
(in algebraic topology).
• Sperner’s Lemma: Any
Sperner Coloring has some
tiny triangle (simplex) whose
k+1 corners are colored by
all k+1 colors.
• Find one?
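For k = 1, Sperner's condition degenerates to the chain argument: on a subdivided path whose endpoints are colored 0 and 1, some edge must carry both colors. A minimal sketch of this one-dimensional case (helper name is ours, not from the lecture):

```python
def sperner_edge_1d(colors):
    """Find an edge of a colored path whose endpoints carry both colors.

    colors: list over {0, 1} with colors[0] == 0 and colors[-1] == 1
    (the 1-dimensional Sperner conditions). Returns an index i such
    that edge (i, i+1) is bichromatic; one must exist, since the color
    changes somewhere along the path.
    """
    assert colors[0] == 0 and colors[-1] == 1
    for i in range(len(colors) - 1):
        if colors[i] != colors[i + 1]:
            return i  # this edge has differently colored endpoints
```

The k-dimensional statement generalizes this: with corners colored by all k+1 colors and the boundary conditions above, some tiny simplex sees all k+1 colors.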
Applying Sperner’s Lemma
• Apply Sperner’s Lemma to the coloring we constructed.
• Yields a tiny triangle (simplex) T with k+1 different colors | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/0628b22125dc0598ad8b1f27422a4f2c_MIT6_852JF09_lec06.pdf |
• Apply Sperner’s Lemma to the coloring we constructed.
• Yields a tiny triangle (simplex) T with k+1 different colors on its
corners.
• Which means k+1 different decision values for the executions and
processes labeling its corners.
• But consistency for T yields a single execution β, with at most f
faults, that is “compatible” with the executions and processes
labeling the corners of T:
– All the corner processes are nonfaulty in β.
– If (α′,i) labels some corner of T, then α′ is indistinguishable by i from β.
• So all the corner processes behave the same in β as they do in their
own corner executions, and decide on the same values as in those
executions.
• That’s k+1 different decision values in one execution with at most f
faults.
• Contradicts k-agreement.
Approximate Agreement
Approximate Agreement problem
• Agreement on real number values:
– Readings of several altimeters on an aircraft.
– Values of approximately-synchronized clocks.
• Consider with Byzantine participants, e.g., faulty hardware.
• Abstract problem:
– Inputs, outputs are reals
– Agreement: Within ε.
– Validity: Within range of initial values of nonfaulty processes.
– Termination: Nonfaulty eventually decide.
• Assumptions: Complete n-node graph, n > 3f.
• Could solve by exact BA, using f+1 rounds and lots of
communication.
• But better algorithms exist:
– Simpler, cheaper
– Extend to asynchronous settings, whereas BA is unsolvable in
asynchronous networks.
Approximate agreement algorithm
[Dolev, Lynch, Pinter, Stark, Weihl]
• Use convergence strategy, successively narrowing the
interval of guesses of the nonfaulty processes.
– Take an average at each round.
– Because of Byzantine failures, need fault-tolerant average.
• Maintain val, latest estimate, initially initial value.
• At every round:
– Broadcast val, collect received values into multiset W.
– Fill in missing entries with any values.
– Calculate W′ = reduce(W), by discarding f largest and f smallest
elements.
– Calculate W″ = select(W′), by choosing the smallest value in W′
and every f’th value thereafter.
– Reset val to mean(W″).
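The per-round update can be sketched directly (function and variable names are ours, assuming f ≥ 1 and |W| = n):

```python
def reduce(W, f):
    # Discard the f smallest and f largest elements of the multiset W.
    W = sorted(W)
    return W[f:len(W) - f]

def select(Wp, f):
    # Smallest value of W', and every f-th value thereafter.
    return Wp[::f]

def round_update(W, f):
    # One round: the new val is the mean of select(reduce(W)).
    Wpp = select(reduce(W, f), f)
    return sum(Wpp) / len(Wpp)
```

On the n = 4, f = 1 example that follows, `round_update([1, 2, 2, 4], 1)` returns 2.0, matching process 1's computation.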
Example: n = 4, f = 1
Initial values: 1, 2, 3, 4
• Process 3 faulty, sends: 2 to proc 1, 100 to proc 2, -100 to proc 4.
• Process 1:
– Receives (1, 2, 2, 4), reduces to (2, 2), selects (2, 2), mean = 2.
• Process 2:
– Receives (1, 2, 100, 4), reduces to (2, 4), selects (2, 4), mean = 3.
• Process 4:
– Receives (1, 2, -100, 4), reduces to (1, 2), selects (1, 2), mean =
1.5.
One-round guarantees
• Lemma 1: Any nonfaulty process’ val after the round is in the range
of nonfaulty processes’ vals before the round.
• Proof: All elements of reduce(W) are in this range, because there
are at most f faults, and we discard the top and bottom f values.
• Lemma 2: Let d be the range of nonfaulty processes’ vals just
before the round. Then the range of nonfaulty processes’ vals after
the round is at most d / (⎣(n – (2f+1)) / f⎦ + 1).
• That is:
– If n = 3f + 1, then the new range is d / 2.
– If n = kf + 1, k ≥ 3, then the new range is d / (k -1).
• Proof: Calculations, in book.
• Example: n = 4, f = 1
– Initial vals: 1, 2, 3, 4; range is 3.
– Process 3 faulty, sends 2 to proc 1, 100 to proc 2, -100 to proc 4.
– New vals of nonfaulty processes: 2, 3, 1.5.
– New range is 1.5.
The complete algorithm
• Just run the 1-round algorithm repeatedly.
• Termination: Add a mechanism, e.g.:
– Each node individually determines a round by which it knows
that the vals of nonfaulty processes are all within ε.
• Collect first round vals, predict using known convergence rate.
– After the determined round, decide locally.
– Thereafter, send the decision value.
• Upsets the convergence calculation.
• But that doesn’t matter because the vals are already within ε.
• Remarks:
– Convergence rate can be improved somewhat by using 2-round
blocks [Fekete].
– Algorithm extends easily to asynchronous case, using an
“asynchronous round” structure we’ll see later.
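The round-prediction step in the termination mechanism follows directly from Lemma 2's convergence rate (a sketch; names are ours):

```python
import math

def rounds_needed(d0, eps, n, f):
    # Per-round shrink factor from Lemma 2: K = floor((n - (2f+1)) / f) + 1.
    K = (n - (2 * f + 1)) // f + 1
    # Smallest r with d0 / K**r <= eps.
    return max(0, math.ceil(math.log(d0 / eps, K)))
```

For the running example (n = 4, f = 1, so K = 2), an initial range of 3 with ε = 0.5 needs 3 rounds: 3 → 1.5 → 0.75 → 0.375.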
Distributed Commit
Distributed Commit
• Motivation: Distributed database transaction processing
– A database transaction performs work at several distributed sites.
– Transaction manager (TM) at each site decides whether it would
like to “commit” or “abort” the transaction.
• Based on whether the transaction’s work has been successfully
completed at that site, and results made stable.
– All TMs must agree on whether to commit or abort.
• Assume:
– Process stopping failures only.
– n-node, complete, undirected graph.
• Require:
– Agreement: No two processes decide differently (faulty or not,
uniformity)
– Validity:
• If any process starts with 0 (abort) then 0 is the only allowed decision.
• If all start with 1 (commit) and there are no faulty processes then 1 is
the only allowed decision.
Correctness Conditions for Commit
• Agreement: No two processes decide differently.
• Validity:
– If any process starts with 0 then 0 is the only allowed decision.
– If all start with 1 and there are no faulty processes then 1 is the
only allowed decision.
– Note the asymmetry: Guarantee abort (0) if anyone wants to
abort; guarantee commit (1) if everyone wants to commit and no
one fails (best case).
• Termination:
– Weak termination: If there are no failures then all processes
eventually decide.
– Strong termination (non-blocking condition): All nonfaulty
processes eventually decide.
2-Phase Commit
• Traditional, blocking algorithm
(guarantees weak termination only).
• Assumes distinguished process 1,
acts as “coordinator” (leader).
• Round 1: All send initial values to
process 1, who determines the
decision.
(Communication pattern: p2, p3, p4 → p1, then p1 → p2, p3, p4.)
• Round 2: Process 1 sends out the
decision.
• Q: When can each process actually decide?
• Anyone with initial value 0 can decide at the beginning.
• Process 1 decides after receiving round 1 messages:
– If it sees 0, or doesn’t hear from someone, it decides 0; otherwise
decides 1.
• Everyone else decides after round 2.
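The coordinator's decision rule can be sketched as (a sketch; parameter names are ours):

```python
def coordinator_decide(own_value, received, n):
    # received: dict pid -> initial value (0 or 1) for the round-1
    # messages the coordinator heard from the other n-1 processes.
    # Decide 0 (abort) on any 0 or any silence; else decide 1 (commit).
    if own_value == 0 or len(received) < n - 1 or 0 in received.values():
        return 0
    return 1
```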
Correctness of 2-Phase Commit
• Agreement:
– Because decision is centralized (and
consistent with any individual initial
decisions).
• Validity:
– Because of how the coordinator decides.
• Weak termination:
– If no one fails, everyone terminates by end of
round 2.
• Strong termination?
– No: If coordinator fails before sending its
round 2 messages, then others with initial
value 1 will never terminate.
Add a termination protocol?
• We might try to add a termination
protocol: other processes try to detect
failure of coordinator and finish
agreeing on their own.
• But this can’t always work:
– If initial values are 0,1,1,1, then by validity,
others must decide 0.
– If initial values are 1,1,1,1 and process 1
fails just after deciding, and before sending
out its round 2 messages, then:
• By validity, process 1 must decide 1.
• By agreement, others must decide 1.
– But the other processes can’t distinguish
these two situations.
(Figure: the two input configurations, 0,1,1,1 and 1,1,1,1.)
Complexity of 2-phase commit
• Time:
– 2 rounds
• Communication:
– At most 2n messages
3-Phase Commit [Skeen]
• Yields strong termination.
• Trick: Introduce intermediate stage, before actually
deciding.
• Process states classified into 4 categories:
– dec-0: Already decided 0.
– dec-1: Already decided 1.
– ready: Ready to decide 1 but hasn’t yet.
– uncertain: Otherwise.
• Again, process 1 acts as “coordinator”.
• Communication pattern: as in 2-phase commit, via coordinator p1.
3-Phase Commit
• All processes initially uncertain.
• Round 1:
– All other processes send their initial values to p1.
– All with initial value 0 decide 0 (and enter dec-0 state)
– If p1 receives 1s from everyone and its own initial value is 1, p1
becomes ready, but doesn’t yet decide.
– If p1 sees 0 or doesn’t hear from someone, p1 decides 0.
• Round 2:
– If p1 has decided 0, broadcasts “decide 0”, else broadcasts “ready”.
– Anyone else who receives “decide 0” decides 0.
– Anyone else who receives “ready” becomes ready.
– Now p1 decides 1 if it hasn’t already decided.
• Round 3:
– If p1 has decided 1, bcasts “decide 1”.
– Anyone else who receives “decide 1”
decides 1.
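A failure-free trace of the three rounds can be sketched as follows (a sketch of the protocol flow only, with p1 at index 0; not the book's code):

```python
def three_phase_no_failures(values):
    # States: "uncertain" -> "ready" -> "dec-1", or -> "dec-0".
    n = len(values)
    state = {p: "uncertain" for p in range(n)}
    # Round 1: anyone with value 0 decides 0; p1 inspects all values.
    for p in range(n):
        if values[p] == 0:
            state[p] = "dec-0"
    state[0] = "ready" if all(v == 1 for v in values) else "dec-0"
    if state[0] == "ready":
        # Round 2: p1 broadcasts "ready", then decides 1 itself.
        for p in range(1, n):
            if state[p] == "uncertain":
                state[p] = "ready"
        state[0] = "dec-1"
        # Round 3: p1 broadcasts "decide 1".
        for p in range(1, n):
            if state[p] == "ready":
                state[p] = "dec-1"
    else:
        # Round 2: p1 broadcasts "decide 0".
        for p in range(1, n):
            state[p] = "dec-0"
    return state
```

Note how no process reaches dec-1 until everyone has left uncertain, which is exactly what the invariants below require.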
3-Phase Commit
• Key invariants (after 0, 1, 2, or 3 rounds):
– If any process is in ready or dec-1, then all processes have initial value 1.
– If any process is in dec-0 then:
• No process is in dec-1, and no non-failed process is ready.
– If any process is in dec-1 then:
• No process is in dec-0, and no non-failed process is uncertain.
• Proof: LTTR.
– Key step: Third condition is preserved when p1 decides 1 after round 2.
– In this case, p1 knows that:
• Everyone’s input is 1.
• No one decided 0 at the end of round 1.
• Every other process has either become ready or has failed (without deciding).
– Implies third condition.
• Note critical use of synchrony here:
– p1 infers that non-failed processes are ready just because round 2 is
completed.
– Without synchrony, would need positive acknowledgments.
Correctness conditions (so far)
• Agreement and validity follow, for these three
rounds.
• Weak termination holds
• Strong termination:
– Doesn’t hold yet---must add a termination protocol.
– Allow process 2 to act as coordinator, then 3,…
– “Rotating coordinator” strategy
3-Phase Commit
• Round 4:
– All processes send current decision status (dec-0, uncertain, ready, or dec-1) to p2.
– If p2 receives any dec-0’s and hasn’t already decided, then p2 decides 0.
– If p2 receives any dec-1’s and hasn’t already decided, then p2 decides 1.
– If all received values, and its own value, are uncertain, then p2 decides 0.
– Otherwise (all values are uncertain or ready and at least one is ready), p2 becomes
ready, but doesn’t decide yet.
• Round 5 (like round 2):
– If p2 has (ever) decided 0, broadcasts “decide 0”, and similarly for 1.
– Else broadcasts “ready”.
– Any undecided process who receives “decide()” decides accordingly.
– Any process who receives “ready” becomes ready.
– Now p2 decides 1 if it hasn’t already decided.
• Round 6 (like round 3):
– If p2 has decided 1, broadcasts “decide 1”.
– Anyone else who receives “decide 1” decides 1.
• Continue with subsequent rounds for p3, p4,…
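The new coordinator's round-4 rule can be sketched as (a simplified sketch that ignores the "hasn't already decided" guard; names are ours):

```python
def p2_after_round4(statuses):
    # statuses: the round-4 reports p2 received (its own included),
    # each one of "dec-0", "dec-1", "uncertain", "ready".
    if "dec-0" in statuses:
        return "dec-0"
    if "dec-1" in statuses:
        return "dec-1"
    if all(s == "uncertain" for s in statuses):
        return "dec-0"  # everyone uncertain: safe to abort
    return "ready"      # some ready, rest uncertain: move toward commit
```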
Correctness
• Key invariants still hold:
– If any process is in ready or dec-1, then all processes
have initial value 1.
– If any process is in dec-0 then:
• No process is in dec-1, and no non-failed process is ready.
– If any process is in dec-1 then:
• No process is in dec-0, and no non-failed process is
uncertain.
• Imply agreement, validity
• Strong termination:
– Because eventually some coordinator will finish the
job (unless everyone fails).
Complexity
• Time until everyone decides:
– Normal case 3
– Worst case 3n
• Messages until everyone decides:
– Normal case O(n)
• Technicality: When can processes stop sending
messages?
– Worst case O(n2)
Practical issues for 3-phase commit
• Depends on strong assumptions, which may be hard to
guarantee in practice:
– Synchronous model:
• Could emulate with approximately-synchronized clocks, timeouts.
– Reliable message delivery:
• Could emulate with acks and retransmissions.
• But if retransmissions add too much delay, then we can’t emulate
the synchronous model accurately.
• Leads to unbounded delays, asynchronous model.
– Accurate diagnosis of process failures:
• Get this “for free” in the synchronous model.
• E.g., 3-phase commit algorithm lets process that doesn’t hear from
another process i at a round conclude that i must have failed.
• Very hard to guarantee in practice: In Internet, or even a LAN, how
to reliably distinguish failure of a process from lost communication?
• Other consensus algorithms can be used for commit,
including some that don’t depend on such strong timing
and reliability assumptions.
Paxos consensus algorithm
• A more robust consensus algorithm, could be used for commit.
• Tolerates process stopping and recovery, message losses and
delays,…
• Runs in partially synchronous model.
• Based on earlier algorithm [Dwork, Lynch, Stockmeyer].
• Algorithm idea:
– Processes use unreliable leader election subalgorithm to choose
coordinator, who tries to achieve consensus.
– Coordinator decides based on active support from majority of processes.
– Does not assume anything based on | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/0628b22125dc0598ad8b1f27422a4f2c_MIT6_852JF09_lec06.pdf |
algorithm to choose
coordinator, who tries to achieve consensus.
– Coordinator decides based on active support from majority of processes.
– Does not assume anything based on not receiving a message.
– Difficulties arise when multiple coordinators are active---must ensure
consistency.
• Practical difficulties with fault-tolerance in the synchronous model
motivate moving on to study the asynchronous model (next time).
Next time…
• Modeling asynchronous systems
• Reading: Chapter 8
MIT OpenCourseWare
http://ocw.mit.edu
6.852J / 18.437J Distributed Algorithms
Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/0628b22125dc0598ad8b1f27422a4f2c_MIT6_852JF09_lec06.pdf |
MIT OpenCourseWare
http://ocw.mit.edu
6.334 Power Electronics
Spring 2007
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms .
Chapter 2
Introduction to Rectifiers
Read Chapter 3 of “Principles of Power Electronics” (KSV) by J. G. Kassakian, M.
F. Schlecht, and G. C. Verghese, Addison-Wesley, 1991.
Start with simple half-wave rectifier (full-bridge rectifier directly follows).
Figure 2.1: Simple Half-wave Rectifier (source Vs sin(ωt), diodes D1 and D2, filter inductor Ld carrying id, output vo across the load; the vx waveform shows D1 on during the positive half-cycle and D2 on during the negative half-cycle)
In P.S.S.:
\[ \langle v_o \rangle = \langle v_x \rangle = \frac{V_s}{\pi} \tag{2.1} \]
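The average in Eq. (2.1) is easy to check numerically: average one period of the half-wave-rectified sine (vx = Vs sin(ωt) while D1 conducts, 0 while D2 freewheels). A sketch with an arbitrary peak amplitude:

```python
import math

Vs = 1.0       # arbitrary peak amplitude
N = 100_000    # samples over one full period
# Riemann average of max(Vs*sin(theta), 0) over 0 <= theta < 2*pi.
avg = sum(max(Vs * math.sin(2 * math.pi * k / N), 0.0) for k in range(N)) / N
assert abs(avg - Vs / math.pi) < 1e-6
```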
If Ld is big:
\[ i_d \simeq I_d = \frac{V_s}{\pi R} \tag{2.2} \]
If \( L_d / R \gg 2\pi/\omega \), we can approximate the load as a constant current.

2.1 Load Regulation

Now consider adding some ac-side inductance Lc (reactance Xc = ωLc).
• Common situation: Transformer leakage or line inductance, machine winding inductance, etc.
• Lc is typically ≪ Ld (filter inductance) as it is a parasitic element.
Figure 2.2: Adding Some AC-Side Inductance (Lc in series between the source Vs sin(ωt) and the diodes D1, D2, with filter inductor Ld and load R)
Assume Ld ∼ ∞ (so ripple current is small). Therefore, we can approximate the load as a “special” current source.
“Special” since ⟨vL⟩ = 0 in P.S.S.
\[ \Rightarrow \quad I_d = \frac{\langle v_x \rangle}{R} \tag{2.3} \]
Assume we start with D2 conducting, D1 off (Vs sin(ωt) < 0). What happens when Vs sin(ωt) crosses zero?
Figure 2.3: Special Current (source Vs sin(ωt) with Lc, current i1 through D1, i2 through D2, constant load current Id)
• D1 off is no longer valid.
• But just after turn-on, i1 is still 0.
• Therefore, D1 and D2 are both on during a commutation period, where current switches from D2 to D1.
Figure 2.4: Commutation Period (both diodes conducting, with vx marked across D2)
D2 will stay on as long as i2 > 0 (i1 < Id).
Analyze:
\[ \frac{di_1}{dt} = \frac{V_s \sin(\omega t)}{L_c} \]
\[ i_1(t) = \frac{1}{L_c} \int V_s \sin(\omega t)\, dt = \frac{V_s}{\omega L_c} \int_0^{\omega t} \sin\phi \, d\phi = \frac{V_s}{\omega L_c} \bigl[ -\cos\phi \bigr]_0^{\omega t} = \frac{V_s}{\omega L_c} \bigl[ 1 - \cos(\omega t) \bigr] \tag{2.4} \]
Figure 2.5: Commutation waveform (i1 rises from 0 toward Id, reaching it at ωt = u)
Commutation ends at ωt = u, when i1 = Id.
Commutation Period:
\[ I_d = \frac{V_s}{\omega L_c} \bigl[ 1 - \cos u \bigr] \quad \Rightarrow \quad u = \cos^{-1}\!\left( 1 - \frac{\omega L_c I_d}{V_s} \right) \]
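Solving the end condition of the commutation for the angle u shows how larger Lc or Id lengthens the commutation interval. A sketch with hypothetical 60 Hz numbers (values are ours, for illustration only):

```python
import math

def commutation_angle(Vs, w, Lc, Id):
    # u solves Id = (Vs / (w * Lc)) * (1 - cos u), from Eq. (2.4).
    return math.acos(1.0 - w * Lc * Id / Vs)

# Hypothetical example: doubling Id lengthens the commutation interval.
u1 = commutation_angle(Vs=100.0, w=2 * math.pi * 60, Lc=1e-3, Id=20.0)
u2 = commutation_angle(Vs=100.0, w=2 * math.pi * 60, Lc=1e-3, Id=40.0)
assert 0 < u1 < u2 < math.pi / 2
```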