.g., that something
eventually happens, like an algorithm terminating.
• Formally, an execution (or fragment) D of A is fair to task T if
one of the following holds:
– D is finite and T is not enabled in the final state of D.
– D is infinite and contains infinitely many events in T.
– D is infinite and contains infinitely many states in which T is not enabled. | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/666ea5c2d621cd26c6badf37a750d290_MIT6_852JF09_lec07.pdf |
” relationship.
Then fairtraces(∏i Ai) ⊆ fairtraces(∏i A′i).
Composition of channels and
consensus processes
[Figure: consensus processes p1 and p2 composed with channels C1,2 and C2,1; external actions init(v)i and decide(v)i, channel actions send(m)i,j and receive(m)i,j]
In fair executions:
• After init, keep sending
latest val forever.
• All messages that are
... | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/666ea5c2d621cd26c6badf37a750d290_MIT6_852JF09_lec07.pdf |
empty (null trace is always safe).
– Prefix-closed: Every prefix of a safe trace is safe.
– Limit-closed: Limit of sequence of safe traces is safe.
• Liveness property: “Good” thing happens
eventually:
– Every finite sequence over acts(P) can be extended to a
sequence in traces(P).
– “It's never too late.”
• Can... | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/666ea5c2d621cd26c6badf37a750d290_MIT6_852JF09_lec07.pdf |
” proved to
relate the states of the two algorithms.
– Prove using induction.
• For asynchronous systems, things become
harder:
– Asynchronous model has more nondeterminism
(in choice of new state, in order of steps).
– So, harder to determine which execs to compare.
• One-way implementation relationship is
enou... | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/666ea5c2d621cd26c6badf37a750d290_MIT6_852JF09_lec07.pdf |
traces(A) ⊆ traces(B).
z This means all traces of A, not just finite traces.
z Proof: Fix a trace of A, arising from a (possibly
infinite) execution of A.
z Create a corresponding execution of B, using an
iterative construction.
[Figure: an execution of A with states s0,A, s1,A, …, s5,A and actions π1, π2, …, π5]
Simulation relations
z Theorem: I... | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/666ea5c2d621cd26c6badf37a750d290_MIT6_852JF09_lec07.pdf |
) traces(C).
Recall: Channel automaton
send(m)
C
receive(m)
• Reliable unidirectional FIFO channel.
• signature
Input actions: send(m), m ∈ M
Output actions: receive(m), m ∈ M
No internal actions
• states
queue: FIFO queue of M, initially empty
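As a side sketch (mine, not from the slides), the queue state and the send/receive actions of this channel automaton can be rendered in Python; the class and method names are assumptions for illustration:

```python
from collections import deque

class Channel:
    """Sketch of the reliable unidirectional FIFO channel automaton."""
    def __init__(self):
        self.queue = deque()        # states: FIFO queue of messages, initially empty

    def send(self, m):              # input action send(m): always enabled
        self.queue.append(m)        # effect: append m to the queue

    def receive_enabled(self):      # output receive(m) enabled iff the queue is non-empty
        return bool(self.queue)

    def receive(self):              # output action: deliver the message at the head
        return self.queue.popleft()
```

A quick check of the FIFO behavior: after send("a") then send("b"), receive() delivers "a" first.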
Channel automaton
send(m)
C
receive(m)
• trans
send(m... | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/666ea5c2d621cd26c6badf37a750d290_MIT6_852JF09_lec07.pdf |
B
A
receive(m)
s R u iff u.queue is concatenation of s.A.queue and s.B.queue
• Step correspondence:
π = send(m) in D corresponds to send(m) in C
π = receive(m) in D corresponds to receive(m) in C
π = pass(m) in D corresponds to the empty sequence λ in C
• Verify that this works:
Actions of C are enabled.
Final states related ... | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/666ea5c2d621cd26c6badf37a750d290_MIT6_852JF09_lec07.pdf |
MIT OpenCourseWare
http://ocw.mit.edu
6.852J / 18.437J Distributed Algorithms
Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/6-852j-distributed-algorithms-fall-2009/666ea5c2d621cd26c6badf37a750d290_MIT6_852JF09_lec07.pdf |
15.083J/6.859J Integer Optimization
Lecture 7: Ideal formulations III
Slide 1
Slide 2
1 Outline
• Minimal counterexample
• Lift and project
2 Matching polyhedron
Pmatching = { x | ∑_{e∈δ({i})} xe = 1,  i ∈ V,
              ∑_{e∈δ(S)} xe ≥ 1,  S ⊂ V, |S| odd, |S| ≥ 3,
              0 ≤ xe ≤ 1,  e ∈ E }.
• F set of perfect matchi... | https://ocw.mit.edu/courses/15-083j-integer-programming-and-combinatorial-optimization-fall-2009/66ab4abef4546bd4b6fd6c3ef9440c45_MIT15_083JF09_lec07.pdf |
a set S ⊂ V with |S| odd, |S| ≥ 3, |V \ S| ≥ 3, and ∑_{e∈δ(S)} xe = 1.
• Contract V \ S to a single new node u, to obtain G′ = (S ∪ {u}, E′).
• Set x′e = xe for all e ∈ E(S), and for v ∈ S,
x′{u,v} = ∑_{j∈V\S, {v,j}∈E} x{v,j}.
• x′ satisfies the constraints with respect to G′.
• ... | https://ocw.mit.edu/courses/15-083j-integer-programming-and-combinatorial-optimization-fall-2009/66ab4abef4546bd4b6fd6c3ef9440c45_MIT15_083JF09_lec07.pdf |
≤ b(1 − xj )
(∗)
Slide 4
and substitute yij = xixj for i, j = 1, …, n, i ≠ j, and yjj = xj (using xj² = xj).
2. Let Lj(P) be the resulting polyhedron.
• (Project) Project Lj(P) back to the x variables by eliminating the variables y.
Let Pj be the resulting polyhedron, i.e., Pj = (Lj(P))x.
3.1 Theore... | https://ocw.mit.edu/courses/15-083j-integer-programming-and-combinatorial-optimization-fall-2009/66ab4abef4546bd4b6fd6c3ef9440c45_MIT15_083JF09_lec07.pdf |
−ej′ x (1 − xj) = −xj(1 − xj) ≤ −(1 − xj).
Replacing xj² by xj, we obtain that xj ≥ 1 is valid for Pj. Since, in addition,
Pj ⊆ P, we conclude that
Pj ⊆ P ∩ {x ∈ Rn | xj = 1} = conv(P ∩ {x ∈ Rn | xj ∈ {0, 1}}).
• Similarly, if P ∩ {x ∈ Rn | xj = 1} = ∅, then
Pj ⊆ conv(P ∩ {x ∈ Rn | xj ∈ {0, 1}}... | https://ocw.mit.edu/courses/15-083j-integer-programming-and-combinatorial-optimization-fall-2009/66ab4abef4546bd4b6fd6c3ef9440c45_MIT15_083JF09_lec07.pdf |
that for all x ∈ P,
a′x + ν(1 − xj) ≤ α.
• For all x satisfying (*),
(1 − xj)(a′x + λxj) ≤ (1 − xj)α,
xj(a′x + ν(1 − xj)) ≤ xjα.
• Hence, adding these,
a′x + (λ + ν)(xj − xj²) ≤ α.
• After setting xj² = xj we obtain that for all x ∈ Pj, a′x ≤ α, thus all valid
inequalities f... | https://ocw.mit.edu/courses/15-083j-integer-programming-and-combinatorial-optimization-fall-2009/66ab4abef4546bd4b6fd6c3ef9440c45_MIT15_083JF09_lec07.pdf |
y = 0,
2x1 − y ≥ 0,
−x2 + y ≥ 0,
y ≤ 0,
x2 − y ≤ 2 − 2x1,
x1 ≥ 0,
0 ≥ 0,
y ≥ 0,
x2 − y ≥ 0.
Eliminating y (the system forces y = 0) gives
x1 ≥ 0,
−x2 ≥ 0,
x2 ≤ 2 − 2x1,
x1 ≥ 0,
x2 ≥ 0,
which leads to
P1 = {(x1, x2)′ | 0 ≤ x1 ≤ 1, x2 = 0}
= conv(P ∩ {(x1, x2)′ | x1 ∈ {0, 1}}).
3.3 Convex hull
• Pi1,i2,…,it = (…((Pi1)i2)…)it
• Theorem: ... | https://ocw.mit.edu/courses/15-083j-integer-programming-and-combinatorial-optimization-fall-2009/66ab4abef4546bd4b6fd6c3ef9440c45_MIT15_083JF09_lec07.pdf |
6.825 Techniques in Artificial Intelligence
Problem Solving and Search
Problem Solving
Lecture 2 • 1
Last time we talked about different ways of constructing agents and why it is
that you might want to do some sort of on-line thinking. It seems like, if you
knew enough about the domain, that off-line you could do... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
• World state is finite, small enough to enumerate
• World is deterministic
The world dynamics are deterministic: when the world is in some state and
the agent does some action, there’s only one thing that could happen in the
world.
Lecture 2 • 4
6.825 Techniques in Artificial Intelligence
Problem Solving and... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
• Utility for a sequence of states is a sum over path
• Agent knows current state
Relaxation of assumptions later in the course
Lecture 2 • 8
We’re going to relax the assumption that the world state is small when we
talk about logic, because logic gives us a way to do abstraction that lets us
deal with very large... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
[learning]
[logic]
• World is deterministic [uncertainty]
• Utility for a sequence of states is a sum over path
• Agent knows current state [logic, uncertainty]
Few real problems are like this, but this may be a
useful abstraction of a real problem
Relaxation of assumptions later in the course
Lecture 2 • 12
L... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
abstraction. If I give you a map that has
dots on it, standing for the towns that somebody thought were big enough to
merit a dot, somebody decided that was a good level of abstraction to think
about driving around this place.
Example: Route Planning in a Map
A map is a graph where nodes are cities and links
... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
that’s not always the case. It depends on how you formulate the
problem. For example, assume you have enough gas to go 20 miles. Then,
you’re going to have a situation where any path that’s longer than 20 miles
has really bad utility and any shorter path is ok. And, that can be hard to
express as a sum. So, there a... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
operator is basically a mapping from states to states; we’ve said
it’s deterministic and we know what it is. There’s a set of operators, and
each operator moves forward or fills up the gas tank or colors in the square
in front of me. Each operator says, if I’m in one state, what’s the next one.
Formal Definitio... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
and operator star, that means some sequence of them. I could have written
just a sequence of states into real numbers; why didn’t I? Assume that there
are three ways of getting home, one involves walking, the other involves
driving, and another involves taking the train. They may all get me home but
it may matter w... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
you want to have one kind of tradeoff; computing
where to go to graduate school, there are other decisions that merit a lot
more thought. It has to do with how much time pressure there is and the
magnitude of the consequences of the decision.
Route Finding
[Map of Romania with cities and road distances] | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
map of Romania. It has a lot more detail, but still
leaves out everything but the raw geography.
Lecture 2 • 27
Search
Lecture 2 • 28
So, we’re going to do searching; we’ll cover the basic methods really fast.
But first, let’s write the general structure of these algorithms. We have
something called an “agen... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
of search and may
have a huge impact on its efficiency and/or the quality of the solutions it
produces.
Lecture 2 • 34
Depth-First Search
[Map of Romania with cities O, Z, S, F, A, R, T, L, M, D, C, B, P]
Lecture 2 • 35
Let’s start by looking at Depth-First Search (DFS). The search strategy is
entirely defined by how we choose a node fr... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
TA
[Map of Romania]
So, we pop A off the stack, expand it, and then push Z,S,T.
Lecture 2 • 38
Depth-First Search
• Treat agenda as a stack (get most recently added node)
• Expansion: put children at top of stack
• Get new nodes from top of stack
A
ZA SA TA
[Map of Romania] | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
and we want to be sure to get the
shortest
• Method 2:
• Don’t expand a node (or add it to the agenda) if
it has already been expanded.
• We’ll adopt this one for all of our searches
Lecture 2 • 42
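As an illustration of Method 2 (the names here are mine, not code from the lecture), a depth-first search whose agenda is a stack of paths and which never expands a node twice can be sketched like this:

```python
def dfs(successors, start, goal):
    """Depth-first search: the agenda is a stack of paths."""
    agenda = [[start]]                      # most recently added path on top
    expanded = set()
    while agenda:
        path = agenda.pop()                 # get most recently added node
        node = path[-1]
        if node == goal:
            return path
        if node in expanded:                # Method 2: never expand a node twice
            continue
        expanded.add(node)
        for child in successors(node):
            agenda.append(path + [child])   # put children at top of stack
    return None
```

On a toy graph where A's successors are pushed in the order Z, S, T, the search dives into T's subtree first, since T ends up on top of the stack.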
Another way to fix the problem is to say that we’re not going to expand a
node or add a node to the agenda if it h... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
been added to the agenda, it has not yet been expanded.
The red subscript letters indicate the path that was followed to this node).
Depth-First Search
• Treat agenda as a stack (get most recently added node)
• Expansion: put children at top of stack
• Get new nodes from top of stack
A
ZA SA TA
OAZ SA TA
S... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
Lecture 2 • 48
This graph doesn’t have any dead ends, so we don’t have to backtrack, we’ll
eventually find our way to the goal. But, will we find our way there the
shortest way? Not necessarily. In our example, we ended up with the path
AZOSFB, which has a higher path cost than the one that starts by go... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
Let
• b = branching factor
• m = maximum depth
• d = goal depth
• O(b^m) time
Lecture 2 • 52
So, how much time, in big O terms, does DFS take? Does it depend on how
close the goal is to you? In the worst case, no. You could go all the way
through the tree before you find the one path where th... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
, this has the
effect of expanding cities going out by depth.
Breadth-First Search
• Treat agenda as a queue (get least recently added node)
• Expansion: put children at end of queue
• Get new nodes from the front of queue
A
[Map of Romania]
So, in our example, we start with A.
Lecture ... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
• Get new nodes from the front of queue
A
ZA SA TA
SA TA OAZ
TA OAZ OAS FAS RAS
OAZ OAS FAS RAS LAT
[Map of Romania]
Next, we pop T and add L.
Lecture 2 • 59
Breadth-First Search
• Treat agenda as a queue (get least recently added node)
• Expansion: put children at end of queue
• Get... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
A
ZA SA TA
SA TA OAZ
TA OAZ OAS FAS RAS
OAZ OAS FAS RAS LAT
OAS FAS RAS LAT
RAS LAT BASF
Result = BASF
Now we add B to the agenda. In some later searches, it will be important to
keep going, because we might find a shorter path to B. But in BFS, the
minute we hit the goal state, we can stop and declare vict... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
) time
• O(b^d) space
[Map of Romania]
Lecture 2 • 65
How much space? What do we have to remember in order to go from one
level to the next level? O(b^d). In order to make the list of nodes at level 4,
you need to know all the level 3 nodes. So, the drawback of BFS is that we
need O(b^d) space.... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
: perform a sequence of DFS searches with
increasing depth cutoff until the goal is found.

DFS cutoff depth   Space     Time
1                  O(b)      O(b)
2                  O(2b)     O(b^2)
3                  O(3b)     O(b^3)
4                  O(4b)     O(b^4)
…                  …         …
d                  O(db)     O(b^d)
Total              Max = O(db)   Sum = O(b^(d+1))
Lecture 2 • 67
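A sketch of iterative deepening under the assumptions above (function names are mine): run a depth-limited DFS with cutoff 1, 2, 3, … until the goal turns up.

```python
def depth_limited(successors, node, goal, limit, path=None):
    """DFS that refuses to look deeper than `limit` steps from the start."""
    path = path or [node]
    if node == goal:
        return path
    if len(path) > limit:                   # depth cutoff reached
        return None
    for child in successors(node):
        if child not in path:               # avoid cycles along the current path
            found = depth_limited(successors, child, goal, limit, path + [child])
            if found:
                return found
    return None

def iterative_deepening(successors, start, goal, max_depth=50):
    """Repeat DFS with cutoff 1, 2, 3, ...; the first hit uses fewest steps."""
    for limit in range(1, max_depth + 1):
        result = depth_limited(successors, start, goal, limit)
        if result:
            return result
    return None
```

The early iterations are cheap precisely because the per-level time grows geometrically, which is why the total is still O(b^(d+1)).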
It seems a little bit wasteful. The crucial thing is that these steps ar... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
Uniform Cost Search
• Breadth-first and Iterative-Deepening find path
with fewest steps (hops).
• If steps have unequal cost, this is not interesting.
• How can we find the shortest path (measured by
sum of distances along path)?
Lecture 2 • 69
If you wanted to find the shortest path from Arad to Bucharest and y... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
in queue
• Explores paths in contours of total path length;
finds optimal path.
Lecture 2 • 71
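One possible rendering of uniform cost search in Python, using a heap as the agenda; the road data below is my partial reconstruction of the lecture's Romania map, so treat the exact adjacency and distances as illustrative:

```python
import heapq

def uniform_cost(neighbors, start, goal):
    """Expand paths in order of total path length (heap-based agenda)."""
    agenda = [(0, [start])]                  # (path cost so far, path)
    expanded = set()
    while agenda:
        cost, path = heapq.heappop(agenda)   # cheapest path first
        node = path[-1]
        if node == goal:
            return cost, path                # first goal popped is optimal (costs >= 0)
        if node in expanded:
            continue
        expanded.add(node)
        for nxt, step in neighbors(node):
            heapq.heappush(agenda, (cost + step, path + [nxt]))
    return None

roads = {"A": [("Z", 75), ("T", 118), ("S", 140)],
         "Z": [("O", 71)], "O": [("S", 151)],
         "T": [("L", 111)], "L": [],
         "S": [("F", 99), ("R", 90)],
         "R": [("P", 97)], "P": [("B", 101)],
         "F": [("B", 211)], "B": []}
```

With these (assumed) distances the cheapest route to B costs 428 via S, R, and P, echoing the 428 mentioned in the transcript.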
What can we say about this search method? Is it going to give us the
shortest path to a goal? Yes, because we’re exploring cities in contours of
real cost (as opposed to BFS where we explored them in order of number of ... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
Lecture 2 • 74
Then we remove it and add Z with cost 75, T with cost 118, and S with cost
140.
Uniform Cost Search
A
Z75 T118 S140
T118 S140 O146
[Map of Romania with cities and road distances]
Lecture 2 • 75
Now, we remove... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
239, and R with path length 230.
Uniform Cost Search
A
Z75 T118 S140
T118 S140 O146
S140 O146 L229
O146 L229 R230 F229 O291
L229 R230 F229 O291
[Map of Romania with cities and road distances]
Lecture 2 • 78
Now we remove O, and since it doesn’t ... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
expand F and add B with a path cost of
440. But we’re not willing to stop yet, because there might be a shorter path.
Eventually, we’ll add B with a length of 428 (via S, R, and P). And when
that’s the shortest path left in the agenda, we’ll know it’s the shortest path to
the goal.
Uniform Cost Search
A
Z75 ... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
it in.
• The algorithm is optimal only when the costs
are non-negative.
Lecture 2 • 82
There is one cautionary note. The algorithm is optimal only in some
circumstances. What? We have to disallow something; it doesn’t come up
in road networks. Let’s say that your cost is money, not distance. Distance
has the pro... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
, I can make a very big tree that has most of the b^m
possible fringe elements in the agenda, so the space cost could be as bad
as b^m, but it’s probably going to be more like b^d in typical cases.
Lecture 2 • 84
Uninformed vs. Informed Search
• Depth-first, breadth-first and uniform-cost searches are
uninfor... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
know whether you’re headed in the right
direction. We can use this information as a “heuristic”. Twenty years ago, AI
was sort of synonymous with heuristics (it's usually defined as a “rule of
thumb”; it's related to the Greek word “eureka,” which means “I found it!”).
It’s usually some information that helps you but... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
two different nodes in our search tree, but there’s really only one underlying
state in the world.
Uninformed vs. Informed Search
• Depth-first, breadth-first and uniform-cost searches are
uninformed.
• In informed search there is an estimate available of the cost
(distance) from each state (city) to the goal... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
an estimate of how far it is from X to the goal state,
we could use that to help decide where we should go next.
Using Heuristic Information
• Should we go to X or Y?
• Uniform cost says go to X
• If h(X) ≫ h(Y), this should affect our choice
[Figure: X and Y each one step from the current node, with differing heuristic estimates to the goal]
Lecture 2 • 93
How would you like to use... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
*.
Lecture 2 • 95
Admissibility
• What must be true about
h for A* to find optimal
path?
[Figure: start → X costs 2 with h(X) = 100; X → goal costs 1 (h = 0). Start → Y costs 73 with h(Y) = 1; Y → goal costs 1 (h = 0).]
Let’s think about when A* is going to be really good and when A* is going to
be not so good. What has to be true about h for A* to find the optimal path?
Lecture 2 • 96
Admi... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
but you have
an h of 100. On the other branch, it costs 73 to get to Y, and you have an h
of 1 and in fact it costs 1 to get to the goal. So, the path via X has total cost
3 and the path via Y has total cost 74.
What would happen if we did A* search in this example? We’d put node X in
the agenda with priority (g +... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
paths. No “bias”
towards goal.
[Figure: start A and goal B as points in the plane]
Assume states are points in the Euclidean plane.
Lecture 2 • 100
In this slide and the next, we just want to gain some intuition about why A* is
a better search method than uniform cost.
Imagine that there are a lot of other roads radiating from the start state... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
• h = 0 is an admissible heuristic (when all costs are non-negative).
Uniform cost search is an instance of A* then. What’s the heuristic? h=0.
It’s admissible. If you say that everywhere I go, that’s it, I’m going to be at
the goal (being very optimistic) that’s going to give you uniform cost search.
Lecture 2 • 102
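A sketch of A* (names mine, not the lecture's code) that makes both points at once: with h = 0 it behaves exactly like uniform cost search, and with the wildly overestimating h from the X/Y example above it returns the suboptimal path:

```python
import heapq

def astar(neighbors, h, start, goal):
    """A* search: the agenda is ordered by f = g + h."""
    agenda = [(h(start), 0, [start])]        # (f, g, path)
    expanded = set()
    while agenda:
        f, g, path = heapq.heappop(agenda)
        node = path[-1]
        if node == goal:
            return g, path
        if node in expanded:
            continue
        expanded.add(node)
        for nxt, step in neighbors(node):
            heapq.heappush(agenda, (g + step + h(nxt), g + step, path + [nxt]))
    return None

# The admissibility example: X costs 2 from the start S with h(X) = 100
# (a wild overestimate); Y costs 73 with h(Y) = 1; each is 1 step from goal G.
edges = {"S": [("X", 2), ("Y", 73)], "X": [("G", 1)], "Y": [("G", 1)]}
h_bad = {"S": 0, "X": 100, "Y": 1, "G": 0}
```

With h_bad, A* commits to the cost-74 path via Y even though the path via X costs only 3: the overestimate at X breaks optimality. With h ≡ 0 the same code finds the cost-3 path, i.e., it degenerates to uniform cost search.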
102... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
relax the problem and say, let’s find the way from A
to B given that we can go any way we want to. And, sometimes you can
take a hard problem and find a relaxed problem that’s really easy to solve
such that the cost of any path in the relaxed problem is less than the
corresponding path in the original problem and t... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
the best
state in the space. One view that you could take of a problem like this is to
say “who needs the operators?” It’s not like you are trying to think about how
to really walk around in this space, but now you can think about these
operators not as things that you do in the world to move you from city to city,... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
order of visit
changes.
• A state in the search for TSP solution is a complete
tour of the cities.
• An operator is not an action in the world that
moves from city to city, it is an action in the
information space that moves from one potential
solution (tour) to another.
Now, we can think about solving TSP by t... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
• Example: find √7. Start at x = 3, let r = 0.01, and repeat
x → x − r · 2x(x² − 7)
• Next guesses are: 2.880, 2.805, 2.756, …, 2.646
Lecture 2 • 112
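A sketch of this gradient-descent example; the target value 7 and the objective f(x) = (x² − 7)²/2 are my reconstruction of the garbled slide, chosen because they reproduce the printed iterates 2.880, 2.805, 2.756, …:

```python
def sqrt_by_gradient_descent(a, x=3.0, r=0.01, steps=200):
    """Minimize f(x) = (x*x - a)**2 / 2 by x -> x - r * f'(x),
    where f'(x) = 2x(x^2 - a)."""
    for _ in range(steps):
        x = x - r * 2 * x * (x * x - a)
    return x

# Starting at x = 3 with a = 7: first iterate is 3 - 0.01*2*3*(9-7) = 2.880,
# and the sequence settles near sqrt(7) ≈ 2.646.
print(round(sqrt_by_gradient_descent(7), 3))
```

The fixed point of the update is x = √a, and near it the map is a contraction for this step size, which is why the deterministic iteration converges when you "start out right."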
Another example of search is any kind of gradient descent method. Let’s
say that you’d like to find the square root of a number. You can actually
apply this in continuous spaces. What are the states? The states are real... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
can prove something about this
search. You can run a deterministic algorithm and prove that you’re going to
an optimal point in the space if you start out right.
Multiple Minima
• Most problems of interest do not have unique global
minima that can be found by gradient descent from
an arbitrary starting point... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
– Let v′ be the value at a candidate point x′
– If v′ < v then accept the new x [x ← x′]
– Else accept the new x with probability exp(−(v′ − v)/kT)
• T = 0.95T /* for example */
• At high temperature, most moves accepted (and can move
between “basins”)
• At low temperature, only moves that improve energy are
accepted
Lecture 2 • 116
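A minimal simulated-annealing sketch matching that acceptance rule; the energy function, neighbor move, and constants below are my own toy choices, and Boltzmann's k is folded into T:

```python
import math, random

def anneal(energy, neighbor, x, T=10.0, cooling=0.95, steps=500):
    """Simulated annealing: always accept improvements; accept worse
    moves with probability exp(-(v2 - v)/T); cool T geometrically."""
    v = energy(x)
    for _ in range(steps):
        x2 = neighbor(x)
        v2 = energy(x2)
        if v2 < v or random.random() < math.exp(-(v2 - v) / T):
            x, v = x2, v2       # accept the move
        T *= cooling            # high T: most moves pass; low T: only improvements
    return x, v

# Toy double-well energy with two basins, near x = +1 and x = -1.
random.seed(0)
energy = lambda x: (x * x - 1) ** 2 + 0.3 * x
move = lambda x: x + random.uniform(-0.5, 0.5)
x, v = anneal(energy, move, x=2.0)
```

At high temperature the walker can hop between basins; as T cools it settles into one of the wells, which is the "move between basins, then freeze" behavior described above.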
Probably many of yo... | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/66ac2a9beea4e159bbf040d1d5b9a113_Lecture2Final.pdf |
function in that
domain? (from 3.17)
What would be a good heuristic function for the
Towers of Hanoi problem? (look this up on the
web, if you don’t know about it)
Other practice problems that we might talk about in
recitation: 4.4, 4.11a,b
Lecture 2 • 118
Summary from previous lecture
• Laplace transform
L[f(t)] ≡ F(s) = ∫₀₋^∞ f(t) e^(−st) dt.
L[u(t)] ≡ U(s) = 1/s.
L[e^(−at)] = 1/(s + a).
• Transfer functions and impedances
L[ḟ(t)] = sF(s) − f(0⁻).
L[∫₀₋^t f(ξ) dξ] = F(s)/s.
f(t) ⇔ x(t):  TF(s) = X(s)/F(s),  Z(s) = F(s)/X(s)  (e.g., Ts(s)/Ω(s) for a rotary system with inertia J and damping b).
... | https://ocw.mit.edu/courses/2-004-systems-modeling-and-control-ii-fall-2007/66b3e0a9d62ad39e9737b36b813060c6_lecture04.pdf |
• Collisions between the mobile charges and the material fabric (ions,
generally disordered) lead to energy dissipation (loss). As a result,
energy must be expended to generate current along the resistor;
i.e., the current flow requires application of a potential across the
resistor:
v(t) = Ri(t) ⇒ V (s) = RI(s) ⇒
... | https://ocw.mit.edu/courses/2-004-systems-modeling-and-control-ii-fall-2007/66b3e0a9d62ad39e9737b36b813060c6_lecture04.pdf |
q(t) = Cv(t) ⇒ i(t) ≡ dq(t)/dt = C dv(t)/dt
• In the Laplace domain:
I(s) = CsV (s) ⇒ ZC(s) ≡ V (s)/I(s) = 1/(Cs)
2.004 Fall ’07
Lecture 04 – Wednesday, Sept. 12
Inductance
[Figure: current loop i(t) with applied voltage v(t) and magnetic field B(t) normal to the loop]
• Current flow i around a loop results in magnetic field B pointing
normal to the loop plane. The magnetic field counteracts change... | https://ocw.mit.edu/courses/2-004-systems-modeling-and-control-ii-fall-2007/66b3e0a9d62ad39e9737b36b813060c6_lecture04.pdf |
∑k Vk(s) = 0 (Kirchhoff's voltage law, in the Laplace domain)
Lecture 04 – Wednesday, Sept. 12
2.004 Fall ’07
Impedances in series and in parallel
[Figure: impedances Z1 and Z2 in series across source V, with voltages V1, V2 and currents I1, I2]
Impedances in series
KCL: I1 = I2 ≡ I.
KVL: V = V1 + V2.
From the definition of impedances:
V = V1 + V2 = (Z1 + Z2) I, so for the series combination Z(s) = Z1(s) + Z2(s).
[Figure: impedances Z1 and Z2 in parallel, with common voltage V and branch currents I1, I2]
Impedances in parallel
KCL: I = I1 + I2.
KVL: V1 = V2 ≡ V.
From d... | https://ocw.mit.edu/courses/2-004-systems-modeling-and-control-ii-fall-2007/66b3e0a9d62ad39e9737b36b813060c6_lecture04.pdf |
Z1 + Z2
2.004 Fall ’07
Lecture 04 – Wednesday, Sept. 12
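The two combination rules can be checked numerically with complex arithmetic (helper names and component values below are my own illustration):

```python
def z_series(z1, z2):
    """Series: same current through both, voltages add, so Z = Z1 + Z2."""
    return z1 + z2

def z_parallel(z1, z2):
    """Parallel: same voltage across both, currents add, so Z = Z1*Z2/(Z1 + Z2)."""
    return z1 * z2 / (z1 + z2)

# Example (assumed values): R = 1 kOhm and C = 1 uF at s = j*1000 rad/s.
s = 1j * 1000.0
R, C = 1e3, 1e-6
zc = 1 / (C * s)                 # capacitor impedance 1/(Cs)
```

Here z_series(R, zc) gives R + 1/(Cs), and |z_parallel(R, zc)| comes out to about 707 Ω at this frequency, since the two branch magnitudes are equal.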
Example: the RC circuit
[Figure: voltage divider with Z1 = R and Z2 = 1/(Cs); input Vi, output VC across the capacitor]
Block diagram & Transfer Function
Vi → [ 1/(1 + RCs) ] → VC
We recognize the voltage divider configuration, with the voltage across the
capacitor as output. The transfer function is obtained as
TF(s) = VC(s)/Vi(s) = Z2/(Z1 + Z2) = 1/(1 + RCs). | https://ocw.mit.edu/courses/2-004-systems-modeling-and-control-ii-fall-2007/66b3e0a9d62ad39e9737b36b813060c6_lecture04.pdf |
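A first-order system with TF(s) = 1/(1 + RCs) has the step response VC(t) = V0 (1 − e^(−t/RC)); a small numerical sketch (component values assumed):

```python
import math

def vc_step(t, V0=1.0, R=1e3, C=1e-6):
    """Step response of TF(s) = 1/(1 + RCs): VC(t) = V0 * (1 - exp(-t/RC))."""
    return V0 * (1.0 - math.exp(-t / (R * C)))

tau = 1e3 * 1e-6        # time constant RC = 1 ms for these assumed values
```

After one time constant VC reaches about 0.632 V0, and VC → V0 asymptotically, matching the capacitor-charging behavior described in the lecture.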
ptotically (VC → V0 as t → ∞).
[Figure: VC(t) step response vs. t [msec]]
2.004 Fall ’07
Lecture 04 – Wednesday, Sept. 12
Example: RLC circuit with voltage source
[Figure: series RLC circuit driven by v(t) with loop current i(t) and output vC(t); Laplace-domain version with impedances Ls, R, 1/(Cs) and voltages VL(s), VR(s), VC(s). Figures 2.3 and 2.4, by MIT OpenCourseWare.]
TF(s) = (1/LC) / ( s² + (R/L) s + 1/LC )
... | https://ocw.mit.edu/courses/2-004-systems-modeling-and-control-ii-fall-2007/66b3e0a9d62ad39e9737b36b813060c6_lecture04.pdf |
;
Ia = 0
(because Vo must remain finite) therefore
I1 + I2 = 0;
Vi − V1 = Vi = I1Z1;
Vo − V1 = Vo = I2Z2.
Combining, we obtain
Vo(s)/Vi(s) = −Z2(s)/Z1(s).
2.004 Fall ’07
Lecture 04 – Wednesday, Sept. 12 | https://ocw.mit.edu/courses/2-004-systems-modeling-and-control-ii-fall-2007/66b3e0a9d62ad39e9737b36b813060c6_lecture04.pdf |
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
6.265/15.070J
Lecture 11-Additional material
Fall 2013
10/9/2013
Martingale Convergence Theorem
Content.
1. Martingale Convergence Theorem
2. Doob’s Inequality Revisited
3. Martingale Convergence in Lp
4. Backward Martingales. SLLN Using Backward Martingale
5. Hewitt-Sav... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
. Clearly,
UN [a, b](ω) is non-decreasing in N . Let U∞[a, b](ω) = limN →∞ UN [a, b](ω).
Then (1) can be re-written as
Λ = ∪a<b:a,b∈Q{ω : U∞[a, b](ω) = ∞}
= ∪a<b:a,b∈QΛa,b.
(2)
Doob’s upcrossing lemma proves that P(Λa,b) = 0 for every a < b. Then we
have from (2) that P(Λ) = 0. Thus, Xn(ω) converges in [−∞, ∞] a... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
Cn as follows.
C1(ω) = 1 if X0(ω) < a, and 0 otherwise.
Inductively,
Cn(ω) = 1, if Cn−1(ω) = 1 and Xn−1(ω) ≤ b,
Cn(ω) = 1, if Cn−1(ω) = 0 and Xn−1(ω) < a,
Cn(ω) = 0, otherwise.
By definition, Cn is predictable. The sequence Cn has the following property.
If X0 < a then C1 = 1. Then the sequence Cn remains equal to 1 unti... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
such that Xt(ω) > b. Without loss of generality, assume that
s1 = min{n : Xn < a}. Let sk+1 = min{n > tk : Xn(ω) < a}. Then,
YN (ω) = ∑_{j≤N} Cj (ω)(Xj (ω) − Xj−1(ω))
= ∑_{1≤i≤k} [ ∑_{si≤t≤li} Ct+1(ω)(Xt+1(ω) − Xt(ω)) ]
+ ∑_{t≥sk+1} Ct+1(ω)(Xt+1(ω) − Xt(ω))  (because otherwise Ct(ω) = 0)
=
(Xli (ω) − Xsi (ω)) + XN (ω) − ... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
b, P(Λa,b) = 0.
Proof. By definition Λa,b = {ω : U∞[a, b] = ∞}. Now by Doob’s Lemma
(b − a)E[UN [a, b]] ≤ E[(XN − a)⁻]
≤ sup_n E[|Xn|] + |a| < ∞.
Now, UN [a, b] ↗ U∞[a, b]. Hence by the Monotone Convergence Theorem,
E[UN [a, b]] ↗ E[U∞[a, b]]. That is, E[U∞[a, b]] < ∞. Hence, P(U∞[a, b] =
∞) = 0.
Doob’s Inequa... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
) ≤ E[XN 1(A)]
λP(A) ≤ E[Xn1(A)]
≤ E[Xn⁺ 1(A)]
≤ E[Xn⁺].
Suppose Xn is a non-negative sub-MG. Then,
P( max_{0≤k≤n} Xk ≥ λ ) ≤ (1/λ) E[Xn].
If it were a MG, then we also obtain
P( max_{0≤k≤n} Xk ≥ λ ) ≤ (1/λ) E[Xn] = (1/λ) E[X0].
3 Lp maximal inequality and Lp convergence
Theorem 3. Let, Xn be a... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
X = lim sup Xn.
n
For Lp convergence, we will use Lp-inequality of Theorem 3. That is,
E[( sup |Xm|)p] ≤ qpE[|Xn|p]
0≤m≤n
Now, sup_{0≤m≤n} |Xm| ↗ sup_{0≤m} |Xm|. Therefore, by the Monotone
Convergence Theorem we obtain that
E[ (sup_{0≤m} |Xm|)^p ] ≤ q^p sup_n E[|Xn|^p] < ∞
Thus, sup0≤m|Xm| ∈ Lp. Now,
|Xn − X| ≤ 2 sup |Xm|
0≤... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
≤ (p/(p − 1)) E[(X⁺_n)^p]^(1/p) E[(X∗,M_n)^((p−1)q)]^(1/q), by Hölder's inequality.  (9)
Here, 1/q = 1 − 1/p ⇒ q(p − 1) = p. Thus, we can simplify (9):
E[(X∗,M_n)^p] ≤ q E[(X⁺_n)^p]^(1/p) E[(X∗,M_n)^p]^(1/q),
i.e.,
||X∗,M_n||_p^p ≤ q ||X⁺_n||_p ||X∗,M_n||_p^(p(1−1/p)) = q ||X⁺_n||_p ||X∗,M_n||_p^(p−1).
Thus,
||X∗,M_n||_p ≤ q ||X⁺_n||_p.
That... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
n is UI.
Proof of Theorem 5. Recall Doob’s convergence theorem’s proof. Let
Λ := {ω : Xn(ω) does not converge to a limit in [−∞, ∞]}
= {ω : lim inf_n Xn(ω) < lim sup_n Xn(ω)}
= ∪_{a,b∈Q} {ω : lim inf_n Xn(ω) < a < b < lim sup_n Xn(ω)}
= ∪_{a,b∈Q} Λa,b
Now, recall Un[a, b] is the number of upcrossings of ...
→ −∞ (by Theorem 5)
Hence, E[X−∞; A] = E[X0; A]. Thus, X−∞ = E[X0|F−∞].
Theorem 7. Let Fn ↓ F−∞, and Y ∈ L1. Then, E[Y|Fn] → E[Y|F−∞] a.s. and
in L1.

Proof. Xn = E[Y|Fn] is a backward MG by definition. Therefore,
Xn → X−∞ a.s. and in L1.
By Theorem 6, X−∞ = E[X0|F−∞] = E[Y|F−∞]. Thus, E[Y|Fn] → E[Y|F−∞].
5 ... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
X−∞ = lim_{n→∞} Sn/n = E[ξ1]
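The identification of the limit of Sn/n with E[ξ1] is just the SLLN; a minimal numerical illustration (exponential ξi with mean 2 are our arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
xi = rng.exponential(scale=2.0, size=1_000_000)   # i.i.d. with E[xi_1] = 2
S_over_n = np.cumsum(xi) / np.arange(1, xi.size + 1)
final = float(S_over_n[-1])   # S_n / n at n = 10^6, close to E[xi_1] = 2
```

The running average settles near 2 as n grows, consistent with Sn/n → E[ξ1] a.s.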
6 Hewitt-Savage 0-1 Law
Theorem 9. Let X1, X2, . . . be i.i.d. and ξ be the exchangeable σ-algebra:
ξn = {A : πnA = A, ∀πn ∈ Sn}; ξ = ∩n ξn.
If A ∈ ξ, then P(A) ∈ {0, 1}.
Proof. The key to the proof is the following Lemma:
Lemma 3. Let X1, ..., Xk be i.i.d. and define
An(φ) =
1... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
to show that indeed E[φ(X1, ..., Xn)|ξ] is E[φ(X1, ..., Xn)].
First, we show that E[φ(X1, ..., Xn)|ξ] ∈ σ(Xk+1, ...) since φ is bounded.
Then, we find that if E[X|G] ∈ F where X is independent of F then E[X|G] is
constant, equal to E[X]. This will complete the proof of Lemma.
First step: consider An(φ). It has npk t... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
that

E[XY] = E[X]E[Y] = E[Y]²,

since E[Y] = E[X]. Now, by definition of conditional expectation, for any
Z ∈ G, E[XZ] = E[Y Z]. Hence, for Z = Y, we have E[XY] = E[Y²]. Thus,

E[Y²] = E[Y]² ⇒ Var(Y) = 0 ⇒ Y = E[Y] a.s.   (13)
This completes the proof of the Lemma.
Now we complete the proof of the Hewitt-Savage law.
We hav... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
φ(Xi1 , ..., Xik ).
An(φ) = E[An(φ)|ξn] = E[φ(X1, ..., Xn)|ξn] → E[φ(X1, ..., Xn)|ξ]   (14)

by the backward MG convergence theorem.
Since X1, ... may not be i.i.d., ξ can be nontrivial. Therefore, the limit need not
be constant. Consider f : R^{k−1} → R and g : R → R. Let In,k be the set of all
distinct 1 ≤ i1, ..., ik... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/66b6c8fdb52304e3777ce8286beaaf7d_MIT15_070JF13_Lec11Add.pdf |
E[f(X1, ..., Xk−1)|ξ] E[g(X1)|ξ] = E[f(X1, ..., Xk−1)g(Xk)|ξ]   (16)
Thus, we have using (16) that for any collection of bounded functions f1, ..., fk,

E[ ∏_{i=1}^{k} fi(Xi) | ξ ] = ∏_{i=1}^{k} E[fi(Xi) | ξ].
Message: given the “symmetry” assumption and given “exchangeable” statistics, the underlying r.v. conditionally beco...
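To make the exchangeable averages An(φ) concrete, here is a small sketch for k = 2 with φ(x, y) = xy (our own test function): An(φ) averages φ over all distinct ordered pairs, is symmetric in the sample (hence ξn-measurable), and is close to E[φ(X1, X2)] = 0 for large n:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=2000)   # i.i.d., so E[X_i X_j] = E[X]^2 = 0
n = X.size

# Average of phi(X_i, X_j) = X_i X_j over all distinct ordered pairs (i, j):
S, Q = X.sum(), (X ** 2).sum()
A_n = float((S ** 2 - Q) / (n * (n - 1)))
```

Permuting the sample leaves A_n unchanged, which is exactly why conditioning on ξn leaves it fixed in (14).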
Coordinate Systems and Separation of Variables
Revisiting the wave equation…

∇²ψ − (1/c²) ∂²ψ/∂t² = 0

where previously in Cartesian coordinates, the Laplacian was given by

∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²
We are now faced with a spherical polar coordinate system, with the
motivation that we might employ... | https://ocw.mit.edu/courses/2-067-advanced-structural-dynamics-and-acoustics-13-811-spring-2004/66bcdda53876d6adf82f7c9796279ca8_lect_7_31.pdf |
(1/sin θ) d/dθ ( sin θ dΘ/dθ ) + [ n(n + 1) − m²/sin²θ ] Θ = 0

(1/r²) d/dr ( r² dR/dr ) + [ k² − n(n + 1)/r² ] R = 0

(1/c²) d²T/dt² + k² T = 0
Elevation Dependence: Legendre Functions
Legendre polynomials: Pn(x) = Pn^0(x)
Represent fields uniform in azimuth coordinate φ
Legend... | https://ocw.mit.edu/courses/2-067-advanced-structural-dynamics-and-acoustics-13-811-spring-2004/66bcdda53876d6adf82f7c9796279ca8_lect_7_31.pdf |
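A small numpy sketch of the Legendre polynomials via the Bonnet recurrence, checking the orthogonality relation ∫_{−1}^{1} Pm Pn dx = 2/(2n+1) δmn (the helper names and grid size are ours):

```python
import numpy as np

def legendre(n, x):
    """P_n(x) via the Bonnet recurrence (k+1)P_{k+1} = (2k+1)x P_k - k P_{k-1}."""
    p_prev, p = np.ones_like(x), x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def trap(y, x):
    """Plain trapezoid rule (avoids depending on a specific numpy version)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

x = np.linspace(-1.0, 1.0, 100001)
i23 = trap(legendre(2, x) * legendre(3, x), x)   # orthogonal pair: ~0
i33 = trap(legendre(3, x) ** 2, x)               # squared norm: 2/(2*3+1) = 2/7
```

The recurrence is the standard way to tabulate Pn(x) without closed forms for each n.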
[Plots: Legendre polynomials Pn(x) for n = 2, 3, 4, 5, and associated Legendre functions Pn^1(x) for n = 3, 4, 5, 6.]
Sphe... | https://ocw.mit.edu/courses/2-067-advanced-structural-dynamics-and-acoustics-13-811-spring-2004/66bcdda53876d6adf82f7c9796279ca8_lect_7_31.pdf |
... + [ k² − n(n + 1)/r² ] un(r) = 0

Standing wave solutions:

jn(x) ≡ (π/2x)^{1/2} J_{n+1/2}(x)
yn(x) ≡ (π/2x)^{1/2} Y_{n+1/2}(x)

Traveling wave solutions:

hn^{(1)}(x) ≡ jn(x) + i yn(x) = (π/2x)^{1/2} H_{n+1/2}^{(1)}(x)
hn^{(2)}(x) ≡ jn(x) − i yn(x) = (π/2x)^{1/2} H_{n+1/2}^{(2)}(x)
[Plot: spherical Bessel functions yn(x) for n = 0, 1, 2, 3, over 0 ≤ x ≤ 16.]
Asymptotic forms for Bessel functions
In odd numbers of dimensions, we are able to express Bessel functions
exactly via trigonometric expansions.
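For example, j0(x) = sin x / x and j1(x) = sin x / x² − cos x / x. A quick numpy check (grid and names are ours) that these trigonometric forms obey the upward recurrence j_{n+1}(x) = ((2n+1)/x) jn(x) − j_{n−1}(x):

```python
import numpy as np

x = np.linspace(0.5, 15.0, 500)

# Closed trigonometric forms of the first spherical Bessel functions:
j0 = np.sin(x) / x
j1 = np.sin(x) / x**2 - np.cos(x) / x
j2 = (3.0 / x**2 - 1.0) * np.sin(x) / x - 3.0 * np.cos(x) / x**2

# Upward recurrence with n = 1: j_2 = (3/x) j_1 - j_0
resid = float(np.max(np.abs(3.0 / x * j1 - j0 - j2)))
```

The residual is at floating-point level, confirming the closed forms are consistent.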
Correspondence between planar and cylindrical expansions
P(kx, ky, z) ↔ P(k...
p(r, φ, z, ω) = ∑_{n=−∞}^{∞} e^{inφ} (1/2π) ∫_{−∞}^{∞} Cn(kz, ω) e^{i kz z} Jn(kr r) dkz
Need one measurement surface for each unknown coefficient function.
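The planar/cylindrical correspondence rests on the Jacobi-Anger expansion e^{ix cos φ} = ∑_n i^n Jn(x) e^{inφ}. Here is a numpy-only sketch that checks it, computing Jn from its integral representation (the helper `bessel_J` and the truncation at |n| ≤ 15 are our choices):

```python
import numpy as np

def bessel_J(n, x, m=20001):
    """J_n(x) from the integral (1/pi) * int_0^pi cos(n*tau - x*sin(tau)) dtau."""
    tau = np.linspace(0.0, np.pi, m)
    y = np.cos(n * tau - x * np.sin(tau))
    return float(np.sum((y[1:] + y[:-1]) * np.diff(tau)) / 2) / np.pi

x, phi = 2.0, 0.7
# Jacobi-Anger: exp(i x cos(phi)) = sum_n i^n J_n(x) exp(i n phi)
series = sum(1j**n * bessel_J(n, x) * np.exp(1j * n * phi)
             for n in range(-15, 16))
err = abs(series - np.exp(1j * x * np.cos(phi)))
```

Because Jn(x) decays rapidly in n for fixed x, a short truncation of the sum already reproduces the plane wave.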
Review
• Plane wave expansions etc
• Plane wave solutions useful in Cartesian geometries.
Motivation
• To treat propagation and scattering problems involving s... | https://ocw.mit.edu/courses/2-067-advanced-structural-dynamics-and-acoustics-13-811-spring-2004/66bcdda53876d6adf82f7c9796279ca8_lect_7_31.pdf |
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Fall 2013
9/23/2013
Brownian motion. Introduction
6.265/15.070J
Lecture 6
Content.
1. A heuristic construction of a Brownian motion from a random walk.
2. Definition and basic properties of a Brownian motion.
1 Historical notes
• 1765 Jan Ingenhousz observations of carbon... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/6774cb366bae388320c0c8a251ee8daf_MIT15_070JF13_Lec6.pdf |
For every constant a,

lim_{n→∞} P( (∑_{1≤i≤n} Xi − µn) / (σ√n) ≤ a ) = (1/√(2π)) ∫_{−∞}^{a} e^{−t²/2} dt.

Now let us look at a sequence of partial sums Sn = ∑_{1≤i≤n}(Xi − µ). For
simplicity assume µ = 0 so that we look at Sn = ∑_{1≤i≤n} Xi. Can we say
anything about Sn as a function of n? In fact...
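A minimal Monte Carlo sketch of the CLT statement above (Bernoulli(1/2) Xi and the sample sizes are our arbitrary choices):

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n, trials, a = 10_000, 200_000, 1.0
mu, sigma = 0.5, 0.5                          # X_i ~ Bernoulli(1/2)

S = rng.binomial(n, 0.5, size=trials)         # one sum of n X_i per trial
Z = (S - mu * n) / (sigma * math.sqrt(n))     # standardized partial sums
empirical = float(np.mean(Z <= a))            # P(Z <= a), estimated
Phi_a = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))  # standard normal CDF at a
```

The empirical probability matches the Gaussian limit Φ(a) to within Monte Carlo accuracy.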
Bn(t1) = (1/√n) ∑_{1≤i≤nt1} Xi and Bn(t2) − Bn(t1) = (1/√n) ∑_{nt1<i≤nt2} Xi.
The two sums contain different elements of the sequence
X1, X2, . . .. Since the sequence is i.i.d., Bn(t1) and Bn(t2) − Bn(t1) are
independent. Namely, for every x1, x2,
P(Bn(t1) ≤ x1, Bn(t2) − Bn(t1) ≤ x2) = P(Bn(t1) ≤ x1) P(Bn(t2) − Bn(t1)...
of denoting Brownian motion by B and its value at time t
by B(t).
Definition 1 (Wiener measure). Given Ω = C[0, ∞), Borel σ-field B defined
on C[0, ∞) and any value σ > 0, a probability measure P satisfying the following properties is called the Wiener measure:
1. P(B(0) = 0) = 1.
2. P has the independent increment... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/6774cb366bae388320c0c8a251ee8daf_MIT15_070JF13_Lec6.pdf |
be explicitly writing samples ω when discussing Brownian motion. Also, when we say B(t) is a Brownian motion, we understand it both as a Wiener measure or simply a sample of it, depending on the context. There should be no confusion.
• It turns out that for any given σ such a probability measure is unique. On
the ... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/6774cb366bae388320c0c8a251ee8daf_MIT15_070JF13_Lec6.pdf |
over all values a ∈ R. B(t) denotes the standard Brownian
motion. Prove that P(B ∈ AR) = 0.
4 Properties
We now derive several properties of a Brownian motion. We assume that B(t)
is a standard Brownian motion.
Joint distribution. Fix 0 < t1 < t2 < · · · < tk. Let us find the joint distribution
of the random vecto... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/6774cb366bae388320c0c8a251ee8daf_MIT15_070JF13_Lec6.pdf |
well
as the Gaussian distribution of the increments, follow immediately. The variance
of the increments cB(t2) − cB(t1) is c2(t2 − t1).
For every positive c > 0, B(t/c) is a Brownian motion with variance 1/c. Indeed, the process is continuous. The increments are stationary, independent, with Gaussian distributio...
B(1)(t) = tB(1/t) for all t > 0, with B(1)(0) = 0, is also a standard Brownian motion.

Proof. We need to verify properties (a)-(c) of Definition 1 plus continuity. The
continuity at any point t > 0 follows immediately since 1/t is a continuous function
and B is continuous, so tB(1/t) is continuous for all t > 0. ...
The difference B(1)(t) − B(1)(s), s < t, is zero mean Gaussian with variance

s²(1/s − 1/t) + (t − s)²(1/t) = t − s.

This proves (c).
We now return to (b). Take any t1 < t2 < t3. We established in (c) that all
the differences B(1)(t2) − B(1)(t1), B(1)(t3) − B(1)(t2), and B(1)(t3) − B(1)(t1) =
B(1)(t3) − B(1)(t2) + B(1)(t2) − B(1)(t1) are zero me...
{ω : lim_{t→0} tB(1/t, ω) = 0}

is equal to unity.

We will use the Strong Law of Large Numbers (SLLN). First set t = 1/n
and consider tB(1/t) = B(n)/n. Because of the independent Gaussian increments
property, B(n) = ∑_{1≤i≤n}(B(i) − B(i − 1)) is the sum of i.i.d. standard
normal variables. By SLLN we have then B(n)/n → ...
(2)
where i.o. stands for infinitely often. Suppose (2) was indeed the case. The
equality means that for almost all samples ω the inequality Zn(ω)/n > ε happens for at most finitely many n.
is a.s.) Zn(ω)/n → 0 as n → ∞. Combining with (1) we would conclude that
a.s.
... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/6774cb366bae388320c0c8a251ee8daf_MIT15_070JF13_Lec6.pdf |
the sum on the left-hand side is finite. Now we use the Borel-Cantelli
Lemma to conclude that (2) indeed holds.
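Both ingredients above, B(n)/n → 0 and Var(B(t) − B(s)) = t − s, are easy to see in simulation. A short sketch, sampling B on integer times via i.i.d. N(0,1) increments (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
paths, n = 1000, 4000

# Standard BM on integer times: unit-time increments are i.i.d. N(0, 1).
B = np.cumsum(rng.standard_normal((paths, n)), axis=1)

slln = float(np.mean(np.abs(B[:, -1]) / n))   # B(n)/n should be near 0
inc = B[:, 2999] - B[:, 999]                  # B(3000) - B(1000)
var_ratio = float(inc.var() / 2000.0)         # should be near (t - s)/2000 = 1
```

The averaged |B(n)|/n is already small at n = 4000, and the increment variance matches t − s.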
5 Additional reading materials
• Sections 6.1 and 6.4 from Chapter 6 of Resnick's book “Adventures in
Stochastic Processes” in the course packet.
• Durrett [2], Section 7.1
• Billingsley [1], Chapter 8. ... | https://ocw.mit.edu/courses/15-070j-advanced-stochastic-processes-fall-2013/6774cb366bae388320c0c8a251ee8daf_MIT15_070JF13_Lec6.pdf |
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
Fall 2018
CONTINUOUS RANDOM VARIABLES - II
6.436J/15.085J
Lecture 11
Contents
1. Review of joint distributions
2. From conditional distribution to joint (Markov kernels)
3. From joint to conditional (disintegration)
4. Example: The bivariate normal distribution
5. Conditional... | https://ocw.mit.edu/courses/6-436j-fundamentals-of-probability-fall-2018/67826d2ee5082d4a4b732b4dd73c0895_MIT6_436JF18_lec11.pdf |
Various lecture notes for 18385.
R. R. Rosales.
Department of Mathematics,
Massachusetts Institute of Technology,
Cambridge, Massachusetts, MA 02139.
September 17, 2012
Abstract
Notes, both complete and/or incomplete, for MIT’s “18.385 Nonlinear Dynamics and Chaos”.
Contents
1 Lecture Notes Fall 2012.
1.1 Lecture # 01,... | https://ocw.mit.edu/courses/18-385j-nonlinear-dynamics-and-chaos-fall-2014/67b0710b3bc473a75d8f91b29954b0e9_MIT18_385JF14_SelectedLec.pdf |
Existence and uniqueness for the I.V. problem for ode.
Lipschitz continuity.
Example of an... | https://ocw.mit.edu/courses/18-385j-nonlinear-dynamics-and-chaos-fall-2014/67b0710b3bc473a75d8f91b29954b0e9_MIT18_385JF14_SelectedLec.pdf |
= set of all the complex valued functions defined on Rd, with unit square integral =
unit sphere in L2(Rd).
When “fully describing” a system the aim is not to necessarily describe the “full” system, but
whatever is “relevant” (e.g.: in item 2 we ignore the color of the pendulum, the air motion around
it, etc). We also m... | https://ocw.mit.edu/courses/18-385j-nonlinear-dynamics-and-chaos-fall-2014/67b0710b3bc473a75d8f91b29954b0e9_MIT18_385JF14_SelectedLec.pdf |