• ∀x,s. Present(x,s) ∧ Portable(x) → Holding(x, Result(Grab, s))
• Frame Axioms: what doesn’t change
Lecture 10 • 19
It's not enough to just specify these positive effects of taking actions. You also have
to describe what doesn't change. The problem of having to specify all the things
that don’t change is called the "frame problem." I think the name comes from
the frames in movies. Animators know that most of the time, from frame to
frame, things don't change.
And when we write down a theory we're saying, well, if there's something here, and
you do a grab, then you're going to be holding the thing. But, we haven’t said,
for instance, what happens to the colors of the objects, or whether your shoes are
tied, or the weather, as a result of doing the grab action. But, you have to
actually say that. At least, in the situation calculus formulation, you have to say,
"and nothing else changes" somehow.
Situation Calculus
• Reify situations: [reify = name, treat them as objects] and
use them as predicate arguments.
• At(Robot, Room6, S9) where S9 refers to a particular situation
• Result function: a function that describes the new situation
resulting from taking an action in another situation.
• Result(MoveNorth, S1) = S6
• Effect Axioms: what is the effect of taking an action in the
world
• ∀x,s. Present(x,s) ∧ Portable(x) → Holding(x, Result(Grab, s))
• ∀x,s. ¬Holding(x, Result(Drop, s))
• Frame Axioms: what doesn’t change
• ∀x,s. color(x,s) = color(x, Result(Grab, s))
Lecture 10 • 20
One way to do this, is to write frame axioms, where you say things like, "for all
objects and situations, the color of X in situation S is the same as the color of X
in the situation that is the result of doing grab." That is to say, picking things up
doesn't change their color. Picking up other things doesn't change their color.
But it can be awfully tedious to have to write all these out.
Situation Calculus
• Reify situations: [reify = name, treat them as objects] and
use them as predicate arguments.
• At(Robot, Room6, S9) where S9 refers to a particular situation
• Result function: a function that describes the new situation
resulting from taking an action in another situation.
• Result(MoveNorth, S1) = S6
• Effect Axioms: what is the effect of taking an action in the
world
• ∀x,s. Present(x,s) ∧ Portable(x) → Holding(x, Result(Grab, s))
• ∀x,s. ¬Holding(x, Result(Drop, s))
• Frame Axioms: what doesn’t change
• ∀x,s. color(x,s) = color(x, Result(Grab, s))
• Can be included in Effect axioms
Lecture 10 • 21
It turns out that there's a solution to the problem of having to talk about all the
actions and what doesn’t change when you do them. You can read it in the book
where they're talking about successor-state axioms. You can, in effect, say what
the conditions are under which an object changes color, for example, and then
say that these are the only conditions under which it changes color.
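A sketch of what such a successor-state axiom might look like for Holding, combining the Grab and Drop effects from the slide (the exact form in the book may differ):

```latex
% Holding is true after doing action a in s iff either a just made it
% true (a Grab whose preconditions held), or it was already true and
% a did not make it false (a was not a Drop).
\[
\forall x, a, s.\;
  \mathit{Holding}(x, \mathit{Result}(a, s)) \leftrightarrow
    \bigl(a = \mathit{Grab} \land \mathit{Present}(x, s) \land \mathit{Portable}(x)\bigr)
    \lor \bigl(\mathit{Holding}(x, s) \land a \neq \mathit{Drop}\bigr)
\]
```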
Planning in Situation Calculus
• Use theorem proving to find a plan
Lecture 10 • 22
Assume you have written down all of these axioms describing the dynamics of the
world; they say how the world changes as you take actions. Then, how can you
use theorem proving to find your answer?
Planning in Situation Calculus
• Use theorem proving to find a plan
• Goal state: ∃s. At(Home, s) ∧ Holding(Gold, s)
Lecture 10 • 23
You can write the goal down as a logical statement. For instance, you
could say, well, I want to find a situation in which the agent is at home and is
holding gold. But I don’t want to just prove that such a situation exists; I
actually want to find it.
Planning in Situation Calculus
• Use theorem proving to find a plan
• Goal state: ∃s. At(Home, s) ∧ Holding(Gold, s)
• Initial state: At(Home, s0) ∧ ¬Holding(Gold, s0) ∧ Holding(Rope, s0) …
Lecture 10 • 24
You would encode your knowledge about the initial situation as some logical
statements about some particular situation. We’ll name it with the constant s0.
We might say that the agent is at home in s0 and it’s not holding gold, but it is
holding rope, and so on.
Planning in Situation Calculus
• Use theorem proving to find a plan
• Goal state: ∃s. At(Home, s) ∧ Holding(Gold, s)
• Initial state: At(Home, s0) ∧ ¬Holding(Gold, s0) ∧ Holding(Rope, s0) …
• Plan: Result(North, Result(Grab, Result(South, s0)))
• A situation that satisfies the requirements
• We can read out of the construction of that situation what
the actions should be.
• First, move South, then Grab and then move North.
Lecture 10 • 25
Then, we could invoke a theorem prover on the initial state description, the effects
axioms, and the goal description. It turns out that if you use resolution
refutation to prove an existential statement, it’s usually easy to extract a
description of the particular s that the theorem prover constructed to prove the
theorem (you can do it by keeping track of the substitutions that get made for s
in the process of doing the proof).
So, for instance, in this example, we might find that the theorem prover found the
goal conditions to be satisfied in the state that is the result of starting in situation
s0, first going south, then doing a grab action, then going north. We can read
the plan right out of the term that names the goal situation.
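This read-out step can be sketched in code. The representation here is my own illustration, not from the lecture: a situation is either the constant "s0" or a nested tuple standing for Result(action, situation).

```python
# Sketch: read a plan out of the nested Result(...) term that names
# the goal situation, by walking the term from the outside in.

def extract_plan(situation):
    """Collect actions from the term; the innermost action is executed first."""
    actions = []
    while situation != "s0":
        _, action, situation = situation
        actions.append(action)
    return list(reversed(actions))

# Result(North, Result(Grab, Result(South, s0)))
goal_situation = ("Result", "North", ("Result", "Grab", ("Result", "South", "s0")))
print(extract_plan(goal_situation))  # ['South', 'Grab', 'North']
```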
Planning in Situation Calculus
• Use theorem proving to find a plan
• Goal state: ∃s. At(Home, s) ∧ Holding(Gold, s)
• Initial state: At(Home, s0) ∧ ¬Holding(Gold, s0) ∧ Holding(Rope, s0) …
• Plan: Result(North, Result(Grab, Result(South, s0)))
• A situation that satisfies the requirements
• We can read out of the construction of that situation what
the actions should be.
• First, move South, then Grab and then move North.
Lecture 10 • 26
Note that we’re effectively planning for a potentially huge class of initial states at
once. We’ve proved that, no matter what state we start in, as long as it satisfies
the initial state conditions, then following this sequence of actions will result in
a new state that satisfies the goal conditions.
Special Properties of Planning
• Reducing specific planning problem to general
problem of theorem proving is not efficient.
Lecture 10 • 27
Way back when I was more naive than I am now, when I taught my first AI course,
I tried to get my students to do this, to write down all these axioms for the
Wumpus world, and to try to use a theorem prover to derive plans. It turned out
that it was really good at deriving plans of Length 1, and sort of OK at deriving
plans of Length 2, and, then, after that, forget it--it took absolutely forever. And
it was not a problem with my students or the theorem prover, but rather with the
whole enterprise of using the theorem prover to do this job.
So, there's sort of a general lesson, which is that even if you have a very general
solution, applying it to a particular problem is often very inefficient, because
you are not taking advantage of special properties of the particular problem that
might make it easier to solve. Usually, the fact that you have a particular
specific problem to start with gives you some leverage, some insight, some
handle on solving that problem more directly and more efficiently. And that's
going to be true for planning.
Special Properties of Planning
• Reducing specific planning problem to general
problem of theorem proving is not efficient.
• We will build a more specialized approach that
exploits special properties of planning problems.
Lecture 10 • 28
We're going to use a very restricted kind of logical representation in building a
planner, and we're going to take advantage of some special properties of the
planning problem to do it fairly efficiently. So, what special properties of
planning can we take advantage of?
Special Properties of Planning
• Reducing specific planning problem to general
problem of theorem proving is not efficient.
• We will build a more specialized approach that
exploits special properties of planning problems.
• Connect action descriptions and state
descriptions [focus searching]
Lecture 10 • 29
One thing we can do is make explicit the connection between action descriptions
and state descriptions. If I had an axiom that said "As a result of doing the grab
action, we can make holding true." and we were trying to satisfy a goal
statement that had “holding” as part of it, then it would make sense to consider
doing the “grab” action. So, rather than just trying all the actions and searching
blindly through the search space, you could notice, "Goodness, if I need holding
to be true at the end, then let me try to find an action that would make it true."
So, you can see these connections, and that can really focus your searching.
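One way to sketch this connection in code (my own illustration, not the lecture's) is to index operators by the literals in their add-lists, so that a goal literal directly suggests candidate actions:

```python
# Index actions by what they can achieve, so that needing Holding(x)
# immediately suggests Grab instead of blind forward search.
from collections import defaultdict

# (name, preconditions, add_list, delete_list) -- schematic literals
OPERATORS = [
    ("Grab", {"Present(x)", "Portable(x)"}, {"Holding(x)"}, set()),
    ("Drop", {"Holding(x)"}, set(), {"Holding(x)"}),
]

achievers = defaultdict(list)
for name, pre, add, delete in OPERATORS:
    for literal in add:
        achievers[literal].append(name)

print(achievers["Holding(x)"])  # ['Grab']
```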
Special Properties of Planning
• Reducing specific planning problem to general
problem of theorem proving is not efficient.
• We will build a more specialized approach that
exploits special properties of planning problems.
• Connect action descriptions and state
descriptions [focus searching]
• Add actions to a plan in any order
Lecture 10 • 30
Another thing to notice, that's going to be really important, is that you can add
actions to your plan in any order. Maybe you're trying to think about how best to
go to Tahiti for spring break; you might think about which airline flight to take
first. That maybe seems like the most important thing. You might think about
that and really nail it down before you figure out how you're going to get to the
airport or what hotel you're going to stay in or a bunch of other things like that.
You wouldn't want to have to make your plan for going to Tahiti actually in the
order that you're going to execute it, necessarily. Right? If you did that, you
might have to consider all the different taxis you could ride in, and that would
take you a long time, and then for each taxi, you think about, "Well, then how
do I...." You don't want to do that. So, it can be easier to work out a plan from
the middle out sometimes, or in different directions, depending on the
constraints.
Special Properties of Planning
• Reducing specific planning problem to general
problem of theorem proving is not efficient.
• We will build a more specialized approach that
exploits special properties of planning problems.
• Connect action descriptions and state
descriptions [focus searching]
• Add actions to a plan in any order
• Sub-problem independence
Lecture 10 • 31
Another thing that we sometimes can take advantage of is sub-problem
independence. If I have the goal to be at home this evening with a quart of milk
and the dry cleaning, I can try to solve the quart of milk part and the dry
cleaning part roughly independently and put those solutions together. So, to the
extent that we can decompose a problem and solve the individual problems and
stitch the solutions back together, we can often get a more efficient planning
process. So, we're going to try to take advantage of those things in the construction
of a planning algorithm.
Special Properties of Planning
• Reducing specific planning problem to general
problem of theorem proving is not efficient.
• We will build a more specialized approach that
exploits special properties of planning problems.
• Connect action descriptions and state
descriptions [focus searching]
• Add actions to a plan in any order
• Sub-problem independence
• Restrict language for describing goals, states
and actions
Lecture 10 • 32
We're also going to scale back the kinds of things that we're able to say about the
world. Especially for right now, just to get a very concrete planning algorithm.
We'll be very restrictive in the language that we can use to talk about goals and
states and actions and so on. We'll be able to relax some of those restrictions
later on.
STRIPS representations
Lecture 10 • 33
Now we're going to talk about something called STRIPS representation. STRIPS is
the name of the first real planning system that anybody built. It was made in a
place called SRI, which used to stand for the Stanford Research Institute, so
STRIPS stands for the Stanford Research Institute Problem Solver or something
like that.
Anyway, they had a robot named Shakey, a real, actual robot that went around from
room to room, and pushed boxes from here to there. I should say, it could do this
in a very plain environment with nothing in the way. They built this planner, so
that Shakey could figure out how to achieve goals by going from room to room,
or pushing boxes around. It worked on a very old computer running in hardly
any memory, so they had to make it as efficient as possible.
So, we're going to look at a class of planners that has essentially grown out of
STRIPS. The algorithms have changed, but the basic representational scheme
has been found to be quite useful.
STRIPS representations
• States: conjunctions of ground literals
• In(robot, r3) ∧ Closed(door6) ∧ …
Lecture 10 • 34
So, we can represent a state of the world as a conjunction of ground literals.
Remember, "ground" means there are no variables. And a "literal" can be either
positive or negative. So, you could say, "In robot Room3" and "closed door6"
and so on. That would be a description of the state of the world. Things that are
not mentioned here are assumed to be unknown. You get some literals, and the
conjunction of all those literals taken together describes the set of possible
worlds; a set of possible configurations of the world. But we don't say whether
Door 1 is open or closed, and so that means Door 1 could be either. And when
we make a plan, it has to be a plan that would work in either case.
STRIPS representations
• States: conjunctions of ground literals
• In(robot, r3) ∧ Closed(door6) ∧ …
• Goals: conjunctions of literals
• (implicit ∃r) In(Robot, r) ∧ In(Charger, r)
Lecture 10 • 35
The goal is also a conjunction of literals, but they're allowed to have variables. You
could say something like "I want to be in a room where the charger is." That
could be your goal. You could say it logically by “In(Robot, r) and In(Charger,
r)”.
And maybe you'd make that true by going to the room where the charger is or
maybe you'd make that true by moving the charger into some other room. It's
implicitly existentially quantified; we just have to find some r that makes it
true.
STRIPS representations
• States: conjunctions of ground literals
• In(robot, r3) ∧ Closed(door6) ∧ …
• Goals: conjunctions of literals
• (implicit ∃r) In(Robot, r) ∧ In(Charger, r)
• Actions (operators)
• Name (implicit ∀): Go(here, there)
Lecture 10 • 36
Actions are sometimes also called operators. They have a name, and sometimes
it's parameterized with variables that are implicitly universally quantified. So,
we might have the operator Go with parameters here and there. This operator
will tell us how to go from here to there for any values of here and there that
satisfy the preconditions.
STRIPS representations
• States: conjunctions of ground literals
• In(robot, r3) ∧ Closed(door6) ∧ …
• Goals: conjunctions of literals
• (implicit ∃r) In(Robot, r) ∧ In(Charger, r)
• Actions (operators)
• Name (implicit ∀): Go(here, there)
• Preconditions: conjunction of literals
– At(here) ∧ path(here, there)
Lecture 10 • 37
The preconditions are also a conjunction of literals. A precondition is something
that has to be true in order for the operator to be successfully applicable. So, in
order to go from here to there, you have to be at here, and there has to be a path
from here to there.
STRIPS representations
• States: conjunctions of ground literals
• In(robot, r3) ∧ Closed(door6) ∧ …
• Goals: conjunctions of literals
• (implicit ∃r) In(Robot, r) ∧ In(Charger, r)
• Actions (operators)
• Name (implicit ∀): Go(here, there)
• Preconditions: conjunction of literals
– At(here) ∧ path(here, there)
• Effects: conjunctions of literals [also known as post-
conditions, add-list, delete-list]
– At(there) ∧ ¬At(here)
Lecture 10 • 38
And then you have the "effect" which is a conjunction of literals. In our example,
the effects would be that you’re at there and not at here. The effects say what's
true now as a result of having done this action. The effects are sometimes
known as “post-conditions”. In the original STRIPS, the positive effects were
known as the add-list and the negative effects as the delete list (and the
preconditions and goals could only be positive).
Now, since we're not working in a very general logical framework (we're using the
logic in a very special way), we don't have to explicitly have arguments that
describe the situation, because, implicitly, an operator description says, "If these
things are true, and I do this action, then these things will be true in the resulting
situation." But the idea of a situation as a changing thing is really built into the
algorithm that we're going to use in the planner.
STRIPS representations
• States: conjunctions of ground literals
• In(robot, r3) ∧ Closed(door6) ∧ …
• Goals: conjunctions of literals
• (implicit ∃r) In(Robot, r) ∧ In(Charger, r)
• Actions (operators)
• Name (implicit ∀): Go(here, there)
• Preconditions: conjunction of literals
– At(here) ∧ path(here, there)
• Effects: conjunctions of literals [also known as post-
conditions, add-list, delete-list]
– At(there) ∧ ¬At(here)
• Assumes no inference in relating predicates (only equality)
Lecture 10 • 39
Now they built this system when, as I say, all they had was an incredibly small and
slow computer, and so they absolutely had to just make it as easy as they could
for the system. As time has progressed and computers have gotten fancier,
people have added more richness back into the planners. Typically, you don't
have to go to quite this extreme. They can do a little bit of inference about how
predicates are related to each other. And then you might be allowed to have,
say, "not(open)" here as the precondition, rather than "closed." But, in the
original version, there was no inference at all. All you had were these operator
descriptions, and whatever they said to add or delete, you added and deleted,
and that was it. And to test if the pre-conditions were satisfied, there was no
theorem proving involved; you would just look the preconditions up in your
database and see if they were there or not. STRIPS is at one extreme of
simplicity in inference, and situation calculus is at the other, complex extreme.
Now, people are building systems that live somewhere in the middle. But it's
useful to see what the simple case is like.
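A minimal sketch of this simple case in code (the class and function names are mine, not STRIPS's): a state is a set of ground literals, testing preconditions is plain set lookup, and applying an operator just deletes and adds literals, with no inference.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset
    add_list: frozenset
    delete_list: frozenset

def applicable(op, state):
    # No theorem proving: every precondition must literally be in the database.
    return op.preconditions <= state

def apply_op(op, state):
    # Delete, then add, as in the original STRIPS.
    return (state - op.delete_list) | op.add_list

go = Operator("Go(Home, SM)",
              preconditions=frozenset({"At(Home)"}),
              add_list=frozenset({"At(SM)"}),
              delete_list=frozenset({"At(Home)"}))

state = frozenset({"At(Home)", "Sells(SM, Milk)"})
assert applicable(go, state)
print(sorted(apply_op(go, state)))  # ['At(SM)', 'Sells(SM, Milk)']
```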
Strips Example
Let's talk about a particular planning problem in STRIPS representation. It involves
doing some errands.
Lecture 10 • 40
Strips Example
• Action
There are two actions.
Lecture 10 • 41
Strips Example
• Action
• Buy(x, store)
– Pre: At(store), Sells(store, x)
– Eff: Have(x)
Lecture 10 • 42
The “buy” action takes two arguments, the product, x, to be bought and the store.
The preconditions are that we are at the store, and that the store sells the desired
product. The effect is that we have x. Nothing else changes. We assume that
we’re still at the store, and that, at least in this problem, we haven’t bought the
last item, so that the store still sells x.
There's a nice problem in the book about extending this domain to deal with the
case where you have money and how you would add money in as a precondition
and a post-condition of the buy operator. It gets a little bit
complicated because you have to have amounts of money and then money
usually declines, and you go to the bank and so on.
Strips Example
• Action
• Buy(x, store)
– Pre: At(store), Sells(store, x)
– Eff: Have(x)
• Go(x, y)
– Pre: At(x)
– Eff: At(y), ¬At(x)
Lecture 10 • 43
The “go” action is just like the one we had before, except we’ll leave off the
requirement that there be a path between x and y. We’ll assume that you can get
from anywhere to anywhere else.
Strips Example
• Action
• Buy(x, store)
– Pre: At(store), Sells(store, x)
– Eff: Have(x)
• Go(x, y)
– Pre: At(x)
– Eff: At(y), ¬At(x)
• Goal
• Have(Milk) ∧ Have(Banana) ∧ Have(Drill)
OK, now let's imagine that your goal is to have milk and have bananas and have a
variable speed drill.
Lecture 10 • 44
Strips Example
• Action
• Buy(x, store)
– Pre: At(store), Sells(store, x)
– Eff: Have(x)
• Go(x, y)
– Pre: At(x)
– Eff: At(y), ¬At(x)
• Goal
• Have(Milk) ∧ Have(Banana) ∧ Have(Drill)
• Start
• At(Home) ∧ Sells(SM, Milk) ∧ Sells(SM, Banana) ∧ Sells(HW, Drill)
We’re going to start at home, knowing that the supermarket sells milk, and the
supermarket sells bananas, and the hardware store sells drills.
Lecture 10 • 45
Planning Algorithms
• Progression planners: consider the effect of all
possible actions in a given state.
At(Home)…
Go(SM)
Go(HW)
…
Lecture 10 • 46
Now, if we go back to thinking of planning, as problem solving, then you might say,
"All right, I'm going to start in this state." You’d write down the logical
description for the start state, which really stands for a set of states. Then, you
could consider applying each possible operator. But remember that our
operators are parameterized. So from Home, we could go to any possible
location in our domain. There might be a lot of places you could go, which
would generate a huge branching factor. And then you can buy stuff. Think of
all the things you could buy, right? There’s potentially an incredible number
of things you could buy. So, the branching factor, when you have these
operators with variables in them is just huge, and if you try to search forward,
you get into terrible trouble. Planners that search directly forward are called
"progression planners."
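A sketch of such a blind progression planner for the shopping domain (the grounding of the operator schemas and all names here are my own illustration). Even in this tiny domain, every Go and every Buy instance shows up as a branch:

```python
from collections import deque

# Ground all operator instances up front: (name, pre, add, delete).
PLACES = ["Home", "SM", "HW"]
STOCK = [("Milk", "SM"), ("Banana", "SM"), ("Drill", "HW")]
ACTIONS = []
for a in PLACES:
    for b in PLACES:
        if a != b:
            ACTIONS.append((f"Go({a},{b})", {f"At({a})"}, {f"At({b})"}, {f"At({a})"}))
for item, store in STOCK:
    ACTIONS.append((f"Buy({item},{store})",
                    {f"At({store})", f"Sells({store},{item})"},
                    {f"Have({item})"}, set()))

def progression_search(start, goal):
    """Breadth-first forward search in the space of states."""
    start = frozenset(start)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for name, pre, add, delete in ACTIONS:
            if pre <= state:
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

start = {"At(Home)", "Sells(SM,Milk)", "Sells(SM,Banana)", "Sells(HW,Drill)"}
goal = {"Have(Milk)", "Have(Banana)", "Have(Drill)"}
print(progression_search(start, goal))
```

This domain is small enough that blind breadth-first search still finds the five-step plan; the point of the lecture's complaint is that the grounding loop above explodes as objects and places are added.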
Planning Algorithms
• Progression planners: consider the effect of all
possible actions in a given state.
At(Home)…
Go(SM)
Go(HW)
…
• Regression planners: to achieve a goal, what must
have been true in previous state.
Have(M) ∧ Have(B) ∧ Have(D)
Buy(M,store)
At(store) ∧ Sells(store,M) ∧ Have(B) ∧ Have(D)
Lecture 10 • 47
But, that doesn't work so well because you can’t take advantage of knowing where
you're trying to go. They're not very directed. So, you say, "OK, well, I need to
make my planner more directed. Let me drive backwards from the goal state,
rather than driving forward from the start state." So, let's think about that.
Rather than progression, we can do regression. So, now we're going to start
with the goal state. Our goal state is this: We have milk and we have bananas
and we have a drill. OK. So now we can do goal regression, which is sort of an
interesting idea. We're going to go backward. If we wanted to make this true in
the world and the last action we took before doing this was a particular action --
let's say it was "buy milk" -- then you could say, "All right, what would have to
have been true in the previous situation in order that "buy milk" would make
these things true in this situation?" So, if I want this to be true in the last step,
what had to have been true on the next to the last step, so that "buy milk" would
put me in that circumstance? "Well, we have to be at a store, and that store has
to sell milk. And we also have to already have bananas and a drill.”
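That regression step can be sketched as a set computation. The convention here (mine, to keep the sketch short) is that goals, preconditions, and add-lists are all sets of positive literals:

```python
def regress(goal, preconditions, add_list):
    """What must have held before an action so that the goal holds after it.
    Returns None when the action doesn't achieve any part of the goal."""
    if not (add_list & goal):
        return None  # irrelevant action
    return (goal - add_list) | preconditions

goal = {"Have(Milk)", "Have(Banana)", "Have(Drill)"}
subgoal = regress(goal,
                  preconditions={"At(store)", "Sells(store,Milk)"},
                  add_list={"Have(Milk)"})
print(sorted(subgoal))
# ['At(store)', 'Have(Banana)', 'Have(Drill)', 'Sells(store,Milk)']
```

A fuller version would also reject actions whose delete-list clobbers part of the remaining goal.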
Planning Algorithms
• Progression planners: consider the effect of all
possible actions in a given state.
At(Home)…
Go(SM)
Go(HW)
…
• Regression planners: to achieve a goal, what must
have been true in previous state.
Have(M) ∧ Have(B) ∧ Have(D)
Buy(M,store)
At(store) ∧ Sells(store,M) ∧ Have(B) ∧ Have(D)
Lecture 10 • 48
So, now you're left with this thing that you're trying to make true. And now it seems
a little bit harder -- right? -- because, again, you could say, "All right, well, what
can I do? I could put in a step for buying bananas. So, then, that's going to be
requiring me to be somewhere. Or, I could put in a step for buying the drill right
now. You guys know that putting the drill-buying step here probably isn't so
good because we can't buy it in the same place. But the planning algorithm
doesn't really know that yet. You could also kind of go crazy. If you pick this "at
store," you could go nuts because "store" could be anything. So, you could just
suddenly decide that you want to be anywhere in the world.
Planning Algorithms
• Progression planners: consider the effect of all
possible actions in a given state.
At(Home)…
Go(SM)
Go(HW)
…
• Regression planners: to achieve a goal, what must
have been true in previous state.
Have(M) ∧ Have(B) ∧ Have(D)
Buy(M,store)
At(store) ∧ Sells(store,M) ∧ Have(B) ∧ Have(D)
• Both have problem of lack of direction – what
action or goal to pursue next.
Lecture 10 • 49
But anyway, you're going to see that if you try to build the planner based on this
principle, that, again, you're going to get into trouble because it's hard to see, of
these things, what to do next -- how they ought to hook together. You can do it.
And you can do search just like usual, and you would try a whole sequence of
actions backwards until it didn't work, and then you'd go up and back-track and
try again. And so the same search procedures that you know about would apply
here. But again, it feels like they're going to have to do a lot of searching, and
it's not very well directed.
Plan-Space Search
• Situation space – both progressive and regressive
planners plan in space of situations
Lecture 10 • 50
Progression and regression search in what we’ll call “situation space”. It’s kind of
like state space, but each node stands for a set of states. You're moving around
in a space of sets of states. Your operations are doing things in the world,
changing the set of states you’re in.
Plan-Space Search
• Situation space – both progressive and regressive
planners plan in space of situations
• Plan space – start with null plan and add steps to
plan until it achieves the goal
Lecture 10 • 51
A whole different way to think about planning is that you’re moving around in a
space of plans. Your operations are adding steps to your plan, changing the
plan. You don’t ever think explicitly about states or sets of states. The idea is
you start out with an empty plan, and then there are operations that you can do
to a plan, and you want to do operations to your plan until it's a satisfactory plan.
Plan-Space Search
• Situation space – both progressive and regressive
planners plan in space of situations
• Plan space – start with null plan and add steps to
plan until it achieves the goal
• Decouples planning order from execution order
Lecture 10 • 52
Now you can decouple the order in which you do things to your plan from the
order in which the plan steps will eventually be executed in the world. You can
decouple planning order from execution order.
Plan-Space Search
• Situation space – both progressive and regressive
planners plan in space of situations
• Plan space – start with null plan and add steps to
plan until it achieves the goal
• Decouples planning order from execution order
• Least-commitment
– First think of what actions before thinking about what
order to do the actions
Lecture 10 • 53
Plan-space search also let us take what's typically called a "least commitment
approach." By "least commitment," what we mean is that we don’t decide on
particular details in our plan, like what store to go to for milk, until we are
forced to. This keeps us from making premature, uninformed choices that we
have to back out of later. You can say, "Well, I'm going to have to go
somewhere and get the drill. I'm going to have to go somewhere and get the
bananas. Can I figure that out and then think about what order they have to go
in? Maybe then I can think about which stores would be the right stores to go to
in order to do that." And so the idea is that you want to keep working on your
plan, but never commit to doing a particular thing unless you're forced to.
53
Plan-Space Search
• Situation space – both progressive and regressive
planners plan in space of situations
• Plan space – start with null plan and add steps to
plan until it achieves the goal
• Decouples planning order from execution order
• Least-commitment
– First think of what actions before thinking about what
order to do the actions
• Means-ends analysis
– Try to match the available means to the current ends
Lecture 10 • 54
Plan-space search also lets us do means-end analysis. That is to say, that you can
look at what the plan is trying to do, look at the means that you have available to
you, and try to match them together. Simon built an early planner that tried to
look at the goal state, and the current state, and find the biggest difference. So,
trying to fly from one place to another, you would say, "The operator that will
reduce the difference most between where I know I am and where I know I want
to be is the flying operator. So, let me put that one in first. And then, I still have
some other differences on either end. Like, there's a difference between being at
my house and being at the airport, and so I'll put in some operators to deal with
that. And there's another difference at the other end. I'll put those in." So, maybe
addressing the biggest difference first is a way to organize your planning in a
way that'll make it more effective. That's another idea at the higher level.
54
Partially Ordered Plan
A partially ordered plan (or PO plan, for short), is kind of a complicated object. It’s
made up of 4 parts.
Lecture 10 • 55
55
Partially Ordered Plan
• Set of steps (instance of an operator)
The first part is a set of steps. A step is an operator, perhaps with some of the
variables filled in with constants.
Lecture 10 • 56
56
Partially Ordered Plan
• Set of steps (instance of an operator)
• Set of ordering constraints Si < Sj
Lecture 10 • 57
The second part is a set of ordering constraints between steps. They say that step I
has to come before step j in the plan. This may just be a partial order (that is, it
doesn’t have to specify whether j comes before k or k comes before j), but it
does have to be consistent (so it can’t say that j comes before k and k comes
before j).
57
Partially Ordered Plan
• Set of steps (instance of an operator)
• Set of ordering constraints Si < Sj
• Set of variable binding constraints v=x
• v is a variable in a step; x is a constant or another variable
Lecture 10 • 58
Then there is a set of variable binding constraints. They have the form "Some
variable equals some value." V is a variable in one of the steps, and the X is a
constant or another variable. So, for example, if you're trying to make a cake,
and you have to have the eggs and the flour in the same bowl. You’d have an
operator that says, "Put the eggs in [something]” and another that says “Put the
flour in [something]." Let's say, you don't want to commit to which bowl you’re
going to use yet. Then, you might say, "Well, whatever the bowl is that I put the
eggs into, it has to be the same as the bowl that I put the flour into, but it's not
yet Bowl32." So, that's the case where you would end up having a partial plan
where you had a variable constraint, that this variable equal that variable.
58
Partially Ordered Plan
• Set of steps (instance of an operator)
• Set of ordering constraints Si < Sj
• Set of variable binding constraints v=x
• v is a variable in a step; x is a constant or another variable
• Set of causal links Si →c Sj
• Step i achieves precondition c for step j
Lecture 10 • 59
And the last thing is really for internal bookkeeping, but it's pretty important. We
also have a set of causal links. A causal link might be "Step I achieved pre-
condition C for Step J." So, if I have to have money in order to buy the bananas,
then I might have a "go to the bank" action and a "buy bananas" action and I
would put, during the bookkeeping and planning, I would put this link in there
that says: The reason I have the go-to-the-bank action in here is because it
achieves the "have(money)" pre-condition that allows me to buy bananas.
So, the way that we do planning is: we add actions that we hope will achieve either
parts of our goal, or pre-conditions of our other actions. And to keep track of
what preconditions we have already taken care of, we put these links in so that
we can remember why we're doing things.
So, a plan is just this. It's the set of steps with these constraints and the bookkeeping
stuff.
59
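The four-part plan object just described can be sketched as a small data structure. This encoding is illustrative only (the names are mine, not the lecture's):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Step:
    """One step: an instance of an operator, e.g. Buy(M, x1)."""
    name: str
    preconds: tuple = ()   # conditions required before the step runs
    effects: tuple = ()    # conditions made true by the step

@dataclass
class POPlan:
    """The four parts of a partially ordered plan."""
    steps: set = field(default_factory=set)       # operator instances
    orderings: set = field(default_factory=set)   # pairs (si, sj) meaning si < sj
    bindings: set = field(default_factory=set)    # pairs (var, value), e.g. ("x1", "SM")
    links: set = field(default_factory=set)       # triples (si, c, sj): si achieves c for sj

buy = Step("Buy(M,x1)", preconds=("At(x1)", "Sells(x1,M)"), effects=("Have(M)",))
plan = POPlan(steps={buy})
plan.bindings.add(("x1", "SM"))   # a variable binding constraint
```

Keeping the causal links as explicit triples is exactly the bookkeeping role described above: they record why each step is in the plan.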
Initial Plan
The way we initialize the planning process is to start with a plan that looks like this.
Lecture 10 • 60
60
Initial Plan
• Steps: {start, finish}
It has two steps: Start and Finish.
Lecture 10 • 61
61
Initial Plan
• Steps: {start, finish}
• Ordering: {start < finish}
We constrain the start step to happen before the finish step.
Lecture 10 • 62
62
Initial Plan
• Steps: {start, finish}
• Ordering: {start < finish}
• start
• Pre: none
• Effects: start conditions
Lecture 10 • 63
Start is a special operator that has no pre-condition and it has as its effect the
starting conditions of the problem. So, in our "going to the supermarket"
example, the starting conditions of that problem were that we were at home and
we didn't have any milk and we didn't have the drill, and so on. In order to make
the planning process uniform, we just assume this starting step has happened, to
set things up.
63
Initial Plan
• Steps: {start, finish}
• Ordering: {start < finish}
• start
• Pre: none
• Effects: start conditions
• finish
• Pre: goal conditions
• Effects: none
And there's a special final action, "finish", that has as its pre-conditions the goal
condition, and it has no effect. So, this is our initial plan.
Lecture 10 • 64
64
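The initial plan just described can be constructed directly. A minimal sketch, assuming steps are (name, preconditions, effects) tuples — the representation is my own, not the lecture's:

```python
def initial_plan(start_conditions, goal_conditions):
    """Two-step initial plan with the constraint start < finish."""
    start = ("start", (), tuple(start_conditions))    # no preconds; effects = initial state
    finish = ("finish", tuple(goal_conditions), ())   # preconds = goals; no effects
    return {
        "steps": {start, finish},
        "orderings": {(start, finish)},   # start must come before finish
        "bindings": set(),
        "links": set(),
    }

# The simplified shopping problem from the example below
plan = initial_plan(("At(H)", "Sells(SM,M)", "Sells(SM,B)"),
                    ("Have(M)", "Have(B)"))
```

Planning then proceeds by adding steps, orderings, bindings, and links to this structure until it is complete and consistent.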
Plan Completeness
Lecture 10 • 65
And now we're going to refine the plan by adding steps, ordering constraints,
variable binding constraints, and causal links until it's a satisfactory plan. And so
the question is, "Well, what is it that makes a plan satisfactory?" Here is a set of
formal conditions on a plan that makes it a solution, that makes it a complete,
correct plan. So, for a plan to be a solution, it has to be complete and consistent.
65
Plan Completeness
• A plan is complete iff every precondition of every
step is achieved by some other step.
Lecture 10 • 66
So, let's talk about when a plan is complete. A plan is complete if every pre-
condition of every step is achieved by some other step.
So basically, if you look at our starting plan there, we see that there's a big list of
pre-conditions for the final step, and so what we're going to have to do is
somehow add enough other steps to achieve all the pre-conditions of the final
step, as well as any of the pre-conditions of those other steps that we've added.
66
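Stated in terms of the causal links, this completeness test is direct: every precondition of every step must be covered by some link. A minimal sketch (representation assumed, not from the lecture):

```python
def is_complete(preconds, links):
    """Complete iff every precondition of every step is achieved by some
    causal link.  preconds: dict step -> set of preconditions;
    links: set of (si, c, sj) triples, "si achieves c for sj"."""
    return all(
        any(c == lc and s == lj for (li, lc, lj) in links)
        for s, pre in preconds.items() for c in pre
    )

pre = {"finish": {"Have(M)", "Have(B)"}, "buy_milk": set()}
links = {("buy_milk", "Have(M)", "finish")}
print(is_complete(pre, links))   # False: Have(B) is not yet achieved
links.add(("buy_bananas", "Have(B)", "finish"))
print(is_complete(pre, links))   # True
```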
Plan Completeness
• A plan is complete iff every precondition of every
step is achieved by some other step.
• Si →c Sj (“step i achieves c for step j”) iff
So, then, let's understand what "achieve" means.
Lecture 10 • 67
67
Plan Completeness
• A plan is complete iff every precondition of every
step is achieved by some other step.
• Si →c Sj (“step i achieves c for step j”) iff
• Si < Sj
For Step I to achieve C for step J, SI has to come before SJ, right? It's part of the
notion of achievement,
Lecture 10 • 68
68
Plan Completeness
• A plan is complete iff every precondition of every
step is achieved by some other step.
• Si →c Sj (“step i achieves c for step j”) iff
• Si < Sj
• c ∈ effects(Si)
And C has to be a member of the effects of SI.
Lecture 10 • 69
69
Plan Completeness
• A plan is complete iff every precondition of every
step is achieved by some other step.
• Si →c Sj (“step i achieves c for step j”) iff
• Si < Sj
• c ∈ effects(Si)
• ¬∃ Sk. ¬c ∈ effects(Sk) and Si < Sk < Sj is consistent with the ordering constraints
Lecture 10 • 70
What else? Is there anything else that you imagine we might need here? What if
going to the bank step achieves "have money" for "buy milk." If we go to the
bank before we buy the milk, and going to the bank achieves having money –
there’s still something that could go wrong. You could go to the bank. Between
going to the bank and buying milk, all kinds of things could happen. Right? You
could be robbed, or you could spend it all on something more fun or who knows
what? So, you also have to say "And there's no intervening event that messes
things up." "There's no Sk such that, not C is in the effects of Sk." "And SI <
SK < Sj is consistent with the ordering constraints." So, there must be no step that
undoes our effect and that could possibly intervene between when we achieve it
with Si and when we need it at Sj.
70
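The three-part achievement condition, including this no-possible-clobberer check, can be sketched as follows. This assumes a simple encoding where effects are sets of literals and "~p" negates p (illustrative, not from the lecture):

```python
from itertools import product

def transitive_closure(pairs):
    """All (a, b) such that a is (transitively) constrained before b."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

def achieves(si, c, sj, effects, orderings):
    """True iff step si achieves condition c for step sj."""
    before = transitive_closure(orderings)
    if (si, sj) not in before:      # si must be constrained before sj
        return False
    if c not in effects[si]:        # c must be an effect of si
        return False
    for sk in effects:              # no clobberer may possibly intervene
        if sk in (si, sj):
            continue
        if "~" + c in effects[sk]:
            # sk undoes c: it is safe only if forced outside the si..sj interval
            if (sk, si) not in before and (sj, sk) not in before:
                return False
    return True

eff = {
    "start": set(),
    "go_bank": {"have_money"},
    "buy_mink": {"~have_money"},    # spending the money clobbers have_money
    "buy_milk": set(),
}
ords = {("go_bank", "buy_milk")}    # buy_mink is unordered: it may intervene
print(achieves("go_bank", "have_money", "buy_milk", eff, ords))   # False
print(achieves("go_bank", "have_money", "buy_milk", eff,
               ords | {("buy_milk", "buy_mink")}))                # True
```

This mirrors the mink-coat discussion: an unordered clobberer is enough to break achievement, and ordering it after the consumer repairs the link.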
Plan Completeness
• A plan is complete iff every precondition of every
step is achieved by some other step.
• Si →c Sj (“step i achieves c for step j”) iff
• Si < Sj
• c ∈ effects(Si)
• ¬∃ Sk. ¬c ∈ effects(Sk) and Si < Sk < Sj is consistent with the ordering constraints
Lecture 10 • 71
The idea is that these ordering constraints don't necessarily specify a total order on
the plan steps. So, it might be that the plan is very laissez-faire. It's says, "Oh, go
to the bank sometime. Buy a mink coat sometime. Buy milk sometime." And so
it doesn't say that buying the mink coat is between the bank and the milk, but
buying the mink coat could be between the bank and the milk. And that's
enough for that to be a violation. So, if we have a way of achieving every
condition so that the thing that's doing the achieving is constrained to happen
before we need it -- we get the money before we buy the milk -- it does achieve
the effect that we need and there's nothing that comes in between to clobber it,
then our plan is good.
71
Plan Completeness
• A plan is complete iff every precondition of every
step is achieved by some other step.
• Si →c Sj (“step i achieves c for step j”) iff
• Si < Sj
• c ∈ effects(Si)
• ¬∃ Sk. ¬c ∈ effects(Sk) and Si < Sk < Sj is consistent with the ordering constraints
• A plan is consistent iff the ordering constraints are
consistent and the variable binding constraints are
consistent.
Lecture 10 • 72
For a plan to be consistent, it is enough for the temporal ordering constraints to be
consistent (we can’t have i before j and j before i) and for the variable binding
constraints to be consistent (we can’t require two constants to be equal).
72
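Both consistency checks are easy to sketch: ordering consistency is the absence of a cycle in the precedence constraints, and binding consistency can use union-find so that no equivalence class contains two distinct constants (the representation is assumed, not from the lecture):

```python
def orderings_consistent(pairs):
    """No step may be (transitively) required to precede itself."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return all(a != b for (a, b) in closure)

def bindings_consistent(bindings, constants):
    """Union-find over equality constraints; two distinct constants
    in one equivalence class is a contradiction."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x
    for a, b in bindings:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    classes = {}
    for x in list(parent):
        classes.setdefault(find(x), set()).add(x)
    return all(len(c & constants) <= 1 for c in classes.values())

print(orderings_consistent({("a", "b"), ("b", "a")}))               # False: a cycle
print(bindings_consistent({("x1", "SM"), ("x1", "H")}, {"SM", "H"}))  # False: x1 = SM = H
```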
PO Plan Example
Lecture 10 • 73
Let us just spend a few minutes doing an example plan for the milk case, very
informally, and next time we'll go through the algorithm and do it again, more
formally.
73
PO Plan Example
start
Sells(SM, M) Sells(SM,B) At(H)
Have(M) Have(B)
finish
Lecture 10 • 74
Here’s our initial plan. With the start and finish steps. We’ll draw ordering
constraints using dashed red lines. We put the effects of a step below it, and the
preconditions of a step above it. We’ll do a simplified version of the whole
problem, deleting the requirement for a drill, and just having as our goal that we
have milk and bananas.
74
PO Plan Example
start
Sells(SM, M) Sells(SM,B) At(H)
At(x1) Sells(x1,M)
Buy (M,x1)
Have(M)
Have(M) Have(B)
finish
Lecture 10 • 75
It doesn’t seem useful to add any further constraints at this point, so let’s buy milk.
We’ll start by adding a step that says we’re going to buy milk at some place
called x1. It has preconditions at(x1) and sells(x1, milk); and it has the effect
have(milk).
75
PO Plan Example
start
Sells(SM, M) Sells(SM,B) At(H)
At(x1) Sells(x1,M)
Buy (M,x1)
Have(M)
Have(M) Have(B)
finish
Now, we can add a blue causal link that says we are going to use this effect of
have(Milk) to satisfy the precondition have(Milk) in the finish step.
Lecture 10 • 76
76
PO Plan Example
start
Sells(SM, M) Sells(SM,B) At(H)
At(x1) Sells(x1,M)
Buy (M,x1)
Have(M)
Have(M) Have(B)
finish
Lecture 10 • 77
Every causal link also implies an ordering link, so we’ll add an ordering link as well
between this step and finish. And we should also require that this step happen
after start.
77
PO Plan Example
start
Sells(SM, M) Sells(SM,B) At(H)
At(x1) Sells(x1,M)
At(x2) Sells(x2, B)
Buy (M,x1)
Have(M)
Buy (B,x2)
Have(B)
Have(M) Have(B)
finish
Now, let's buy bananas. We add a step to buy bananas at location x2, including its
preconditions and effects.
Lecture 10 • 78
78
PO Plan Example
start
Sells(SM, M) Sells(SM,B) At(H)
At(x1) Sells(x1,M)
At(x2) Sells(x2, B)
Buy (M,x1)
Have(M)
Buy (B,x2)
Have(B)
Have(M) Have(B)
finish
Now we add a causal link and temporal constraints, just as we did for the buy milk
step.
Lecture 10 • 79
79
PO Plan Example
x1 = SM
start
Sells(SM, M) Sells(SM,B) At(H)
At(x1) Sells(x1,M)
At(x2) Sells(x2, B)
Buy (M,x1)
Have(M)
Buy (B,x2)
Have(B)
Have(M) Have(B)
finish
Lecture 10 • 80
Now, a relatively straightforward thing to do is satisfy sells(x1,Milk) by
constraining x1 to be the supermarket. We add a variable binding constraint,
saying that x1 is equal to the supermarket. And that allows us to put a causal
link between Sells(SM,M) in the effects of start, and the precondition here on
buy.
80
PO Plan Example
x1 = SM
x2 = SM
start
Sells(SM, M) Sells(SM,B) At(H)
At(x1) Sells(x1,M)
At(x2) Sells(x2, B)
Buy (M,x1)
Have(M)
Buy (B,x2)
Have(B)
Have(M) Have(B)
finish
Similarly, we can satisfy sells(x2,Bananas) by adding a variable binding constraint
that x2 must be the supermarket, and adding a causal link.
Lecture 10 • 81
81
PO Plan Example
x1 = SM
x2 = SM
start
Sells(SM, M) Sells(SM,B) At(H)
At(x3)
GO (x3,SM)
At(SM)
At(x1) Sells(x1,M)
At(x2) Sells(x2, B)
Buy (M,x1)
Have(M)
Buy (B,x2)
Have(B)
Have(M) Have(B)
finish
Lecture 10 • 82
Now, the only preconditions that remain unsatisfied are at(x1) and at(x2). Since x1
and x2 are both constrained to be the supermarket, it seems like we should add a
step to go to the supermarket.
82
PO Plan Example
x1 = SM
x2 = SM
start
Sells(SM, M) Sells(SM,B) At(H)
At(x3)
GO (x3,SM)
At(SM)
At(x1) Sells(x1,M)
At(x2) Sells(x2, B)
Buy (M,x1)
Have(M)
Buy (B,x2)
Have(B)
Have(M) Have(B)
finish
The effect of at(SM) can be used to satisfy both preconditions, so we add causal
links to the at preconditions and temporal links to the buy actions.
Lecture 10 • 83
83
PO Plan Example
x1 = SM
x2 = SM
start
Sells(SM, M) Sells(SM,B) At(H)
At(x3)
GO (x3,SM)
At(SM)
At(x1) Sells(x1,M)
At(x2) Sells(x2, B)
Buy (M,x1)
Have(M)
Buy (B,x2)
Have(B)
Have(M) Have(B)
finish
And we add another temporal constraint to force this step to come after start.
Lecture 10 • 84
84
x1 = SM
x2 = SM
x3 = H
PO Plan Example
start
Sells(SM, M) Sells(SM,B) At(H)
At(x3)
GO (x3,SM)
At(SM)
At(x1) Sells(x1,M)
At(x2) Sells(x2, B)
Buy (M,x1)
Have(M)
Buy (B,x2)
Have(B)
Have(M) Have(B)
finish
Now, the At(x3) precondition can be satisfied by adding a variable binding
constraint to force x3 to be home.
Lecture 10 • 85
85
x1 = SM
x2 = SM
x3 = H
PO Plan Example
start
Sells(SM, M) Sells(SM,B) At(H)
At(x3)
GO (x3,SM)
At(SM)
At(x1) Sells(x1,M)
At(x2) Sells(x2, B)
Buy (M,x1)
Have(M)
Buy (B,x2)
Have(B)
Have(M) Have(B)
finish
We can add a causal link from the At(home) effect of start to this precondition, and
we’re done!
Lecture 10 • 86
86
x1 = SM
x2 = SM
x3 = H
PO Plan Example
start
Sells(SM, M) Sells(SM,B) At(H)
At(x3)
GO (x3,SM)
At(SM)
At(x1) Sells(x1,M)
At(x2) Sells(x2, B)
Buy (M,x1)
Have(M)
Buy (B,x2)
Have(B)
Have(M) Have(B)
finish
Lecture 10 • 87
If you look at this plan, you find that it has in fact followed the “least commitment”
strategy and left open the question of what order to buy the milk and bananas in.
If the plan is complete, according to the definition we saw above, but the temporal
constraints do not specify a total order, then any total order of the steps that
is consistent with the constraints will be a correct plan that satisfies all the goals.
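Extracting one such total order from the finished plan is just a topological sort of the steps under the ordering constraints. A sketch on the shopping example, with step names abbreviated and assuming the constraints are acyclic (they are, for any consistent plan):

```python
def linearize(steps, orderings):
    """Return one total order consistent with the partial order (Kahn's algorithm).
    Assumes the orderings are consistent (acyclic)."""
    remaining = set(steps)
    order = []
    while remaining:
        # any step with no unfinished predecessor is ready to schedule
        ready = [s for s in remaining
                 if not any(a in remaining for (a, b) in orderings if b == s)]
        step = sorted(ready)[0]   # deterministic tie-break; any choice is valid
        order.append(step)
        remaining.discard(step)
    return order

steps = {"start", "go(H,SM)", "buy(M)", "buy(B)", "finish"}
orderings = {("start", "go(H,SM)"), ("go(H,SM)", "buy(M)"), ("go(H,SM)", "buy(B)"),
             ("buy(M)", "finish"), ("buy(B)", "finish"), ("start", "finish")}
print(linearize(steps, orderings))
# ['start', 'go(H,SM)', 'buy(B)', 'buy(M)', 'finish']
```

The two buy steps are unordered relative to each other, so either interleaving is a valid execution — exactly the freedom least commitment preserved.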
Next time, we’ll write down the algorithm for doing partial order planning and go
through a couple of examples.
87 | https://ocw.mit.edu/courses/6-825-techniques-in-artificial-intelligence-sma-5504-fall-2002/1184a975225bdbab3e3d215bf173bde1_Lecture10FinalPart1.pdf |
3.46 PHOTONIC MATERIALS AND DEVICES
Lecture 3: System Design: Time and Wavelength Division Multiplexing
Lecture
DWDM Components
CATV → 1 amp/1000 homes
Fiber (α = 0.2 dB/km)
core: d = 8 μm
clad: d = 125 μm
Δn = 0.5%
Dispersion control: D↓ or Δn↓
Amplifier (EDFA)
SiO2 : Er (100 ppm)
More BW
Al2O3 doping → Al3+ ⇒ more Er3+ w/o clustering
+Sb-doping → more BW
+Yb → pumping efficiency
L = 20-60 meters
Amplification = 10³–10⁴
BW: 12 nm → 16 ch
35 nm → 80 ch
80 nm → 200 ch
Waveguide amplifier Δ = 10%
Higher Δ than fiber : higher pump rate
System
Dispersion compensation
Broader spectrum → D fiber (opposite
dispersion filter)
40 Gb/s ⇒ Dmax = 63 ps/nm
10 Gb/s ⇒ Dmax = 10³ ps/nm
3.46 Photonic Materials and Devices
Prof. Lionel C. Kimerling
Lecture 3: System Design
Page 1 of 4
Lecture
B² · DL ≈ 10⁵ ps/nm · (Gb/s)²
⇒ B ↑ by 2×, D ↓ by 4×
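This rule of thumb reproduces the Dmax figures quoted above; a quick check:

```python
def d_max(bitrate_gbps, limit=1e5):
    """Max accumulated dispersion (ps/nm) from the B^2 * DL ~ 1e5 rule of thumb."""
    return limit / bitrate_gbps ** 2

print(d_max(40))   # 62.5 ps/nm  (~63, as quoted for 40 Gb/s)
print(d_max(10))   # 1000.0 ps/nm (10^3, as quoted for 10 Gb/s)
```

Doubling the bit rate quarters the allowed dispersion, as the ⇒ line states.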
Filters (MUX, deMUX, add/drop)
Mach-Zehnder interferometer
T (delay) = L ng/C0
1 dB system penalty
ng = group index
H(ν) = (1/2)[1 − e^(−j2πνT)]
Fabry-Perot interferometer
H(ν) = Σn Cn e^(−j(2πνTn − Φn))
FSR = 1/T = C0/(nL)
T = 100 ps ⇒ FSR = 10 GHz
FSR = free spectral range
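The Mach-Zehnder response and the FSR relation can be checked numerically. With T = 100 ps the FSR comes out to 10 GHz as stated, and the bar-port response swings between a null and full transmission across half an FSR (a sketch; the 1/2(1 − e^(−j2πνT)) form is taken from the transfer function above):

```python
import cmath

def H_mz(nu, T):
    """Mach-Zehnder bar-port response H(nu) = (1/2)(1 - exp(-j*2*pi*nu*T))."""
    return 0.5 * (1 - cmath.exp(-2j * cmath.pi * nu * T))

T = 100e-12          # 100 ps differential delay
FSR = 1 / T          # free spectral range: 10 GHz
print(FSR)                           # ~1e10 Hz
print(abs(H_mz(0, T)) ** 2)          # ~0: null at nu = 0
print(abs(H_mz(FSR / 2, T)) ** 2)    # ~1: peak half an FSR away
```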
[Figure: Filter applications in a simplified WDM system — Terminal A transmitters (Tx #1…#N) into a 1×N multiplexer; fiber spans with amplifiers and a gain equalizer; an add/drop node (λ1D dropped to Rx #1D, λ1A added from Tx #1A) with a dispersion compensator τ(λ); then a 1×N demultiplexer feeding Terminal B receivers (Rx #1A…#N)]
Terminal B
3.46 Photonic Materials and Devices
Prof. Lionel C. Kimerling
Lecture 3: System Design
Page 2 of 4
Lecture
Notes
[Figure: A Mach-Zehnder interferometer — (a) free-space propagation with splitter, combiner, turning mirrors, and ΔL/2 path difference; (b) waveguide device; (c) transmission response.]
3.46 Photonic Materials and Devices
Prof. Lionel C. Kimerling
Lecture 3: System Design
Page 3 of 4
Lecture
Notes
Splitters & Combiners
[Figure: A Fabry-Perot interferometer — (a) free-space propagation with directional couplers (R·L/2 = π); (b) waveguide analog; (c) transmission response.]
3.46 Photonic Materials and Devices
Prof. Lionel C. Kimerling
Lecture 3: System Design
Page 4 of 4 | https://ocw.mit.edu/courses/3-46-photonic-materials-and-devices-spring-2006/1190b2ecfa7f8d9a1da501aff52c7a1f_3_46l3_sysdesign.pdf |
System Architecture
IAP Lecture 2
Ed Crawley
January 11, 2007
Rev 2.0
Massachusetts Institute of Technology © Ed Crawley 2007
1
Today’s Topics
• Definitions - Reflections
• Form
• Function
• Reference PDP - “In the Small”
Massachusetts Institute of Technology © Ed Crawley 2007
2
¿ Reflections on Definitions?
• System
• Complex
• Value
• Product
• Principle/Method/Tool
Massachusetts Institute of Technology © Ed Crawley 2007
3
Architecture
• Consists of:
– Function
– Related by Concept
– To Form
[Diagram: triangle relating Function, Concept, and Form]
Massachusetts Institute of Technology © Ed Crawley 2007
4
¿ Form - Reflections?
• What is the form of simple systems?
• How did you express the decompositional view?
• How did you represent the graphical structural view?
• How did you represent the list like structural view?
• What do the lines in the graphical view, or
connections in the list view represent? Is it form?
• Did you identify classes of structural relations?
Massachusetts Institute of Technology © Ed Crawley 2007
5
Form Topics
• Representing form and structure
• Whole product system and use context
• Boundaries and Interfaces
• Attributes and states
Massachusetts Institute of Technology © Ed Crawley 2007
6
Representations of Form
Form can be represented by:
• Words (in natural language - nouns)
• Code
• Illustrations, schematics, drawings
• Each discipline has developed shorthand for
representing their discipline specific form
But the form is the actual suggested
physical/informational embodiment.
Massachusetts Institute of Technology © Ed Crawley 2007
7
Duality of Physical and Informational Form
• Physical form can always be represented by
informational form (e.g. a building and a drawing of a
building)
• Informational form must always be stored or encoded in
physical form (data in a CD, thought in neurons, poetry
in print)
• So there is a duality of the physical and informational
world (styled after the duality of particles and waves)
Massachusetts Institute of Technology © Ed Crawley 2007
8
Classes of Structural Connections
• Connections that are strictly descriptions of
form:
– Relative spatial location or topology (e.g. above, is next
to, is aligned with, is within, overlaps with, etc.) which
refers to previous arranging process
– Information about assembly/implementation (e.g.
connected to, bolted to, compiled with, etc.) which
refers to previous assembling/implementing process
• Connections that are descriptions of function
while operating (still to be discussed)
Massachusetts Institute of Technology © Ed Crawley 2007
9
Spatial/Topological Structure - Whistle
[Figure by MIT OCW: whistle parts (bump, channel, ramp, hole, step, cavity wall) within the product/system boundary, connected by “touches,” “is aligned with,” and “is a boundary of” relations]
Massachusetts Institute of Technology © Ed Crawley 2007
10
OPM Object-Object Structural Links
• Defined: A structural link is the symbol that represents a
binary relationship between two objects.
• There is also a backward direction relation.
• Usually it is only necessary to show one, and the other
is implicit.
Massachusetts Institute of Technology © Ed Crawley 2007
11
Structural Link Examples
Chair is under Table: spatial (under)
Data is stored in Disk: topological (within)
Array contacts 25 Blades: topological (touching)
Wheels are bolted to Axle: implementation
Capacitor is connected to Resistor: implementation
Massachusetts Institute of Technology © Ed Crawley 2007
12
Spatial/Topological Structure - Whistle -
“List”
[N-squared matrix: rows and columns Bump, Channel, Ramp, Hole, Step, Cavity; cells hold the “touches,” “is aligned with,” and “is a boundary of” / “is bounded by” relations between the parts]
• “N-squared” matrix representation gives a list-like representation
of connectivity (read from left row-wise)
• Symmetric (with transformation like “surrounds” to “within”, “is
a boundary of” to “is bounded by”)
• Is non-causal - no sense of anything happening before anything
else
Massachusetts Institute of Technology © Ed Crawley 2007
13
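Such an N-squared matrix is just a symmetric part-by-part table. A sketch for a hypothetical subset of the whistle's "touches" relations (the specific pairs below are illustrative, not read off the slide):

```python
parts = ["Bump", "Channel", "Ramp", "Hole", "Step", "Cavity"]
# hypothetical subset of the whistle's structural relations
touches = {("Bump", "Channel"), ("Channel", "Ramp"),
           ("Ramp", "Step"), ("Step", "Cavity")}

idx = {p: i for i, p in enumerate(parts)}
matrix = [["" for _ in parts] for _ in parts]
for a, b in touches:
    matrix[idx[a]][idx[b]] = "touches"
    matrix[idx[b]][idx[a]] = "touches"   # symmetric, non-causal

# read from left, row-wise, as the slide says
for p, row in zip(parts, matrix):
    print(f"{p:8s}", row)
```

Spatial and implementation relations could share one matrix by filling the upper and lower triangles separately.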
Implementation Structure - Whistle
[Figure by MIT OCW: whistle parts (bump, channel, ramp, hole, step, cavity wall) within the product/system boundary, connected by “mechanically integral” implementation links]
14
Implementation Structure - Whistle - “List”
[N-squared matrix: rows and columns Bump, Channel, Ramp, Hole, Step, Cavity; connected cells hold “mechanically integral”]
• “N-squared” matrix representation gives a list-like representation
of connectivity (read from left row-wise)
• Symmetric, non-causal - no sense of anything happening before
anything else
• Spatial and implementation structure can be combined by
showing one in the upper and one in the lower diagonal
Massachusetts Institute of Technology © Ed Crawley 2007
15
Issues Raised
• How do you identify form independent of function?
• How do you define the atomic part level? For hardware?
For software?
• Can you “decompose” an integral part? Can a part be a
system? Can the features of a part be elements?
• How do you represent the structural interconnections of
the elements, as opposed to their “membership” in a
system?
• N occurrences of an element - count once or N times?
The class or each instance?
• Connectors and interface elements - count as a separate
element - or combined with other elements?
• Are there important elements lost in implementation?
• Are there elements important to delivering value other
than the product?
Massachusetts Institute of Technology © Ed Crawley 2007
16
Form of Product/System - Questions
• What is the product system?
• What are its principal elements?
• What is its formal structure (i.e. the structure of
form)?
Massachusetts Institute of Technology © Ed Crawley 2007
Figure by MIT OCW.
17
The Whole Product System
Figure by MIT OCW.
• We usually architect form which is both a product and a system,
and which we designate the product/system
• Often for the product/system to deliver value, it must be joined and
supported by other supporting systems
• Most often, one of the other objects in the supporting system is the
operator, the intelligent (usually human) agent who actually
operates the system when it is used
• Together, the product/system, plus these other supporting systems,
constitute the whole product system
[Diagram: whole product system containing the product/system (?), the operator, and supporting systems 1–4]
Massachusetts Institute of Technology © Ed Crawley 2007
Whole product system
18
Product/system boundary
Whole Product System - Whistle
• The whistle only requires an operator and the air to work
[Diagram: whistle parts (bump, channel, ramp, step, hole, cavity wall, star, ring) inside the product/system boundary; the operator and external air complete the whole product system]
Massachusetts Institute of Technology © Ed Crawley 2007
19
Accountability for the Whole Product
System
• Even though you only supply the product/system,
all of the other supporting systems must be present
and work to yield value - BEWARE!!!
• They may fail to be present, or fail to work
• You will find that you are:
– Responsible for the product system - BUT
– Accountable for the whole product system
• What architecting approaches can you use to make sure
this is not a problem?
Massachusetts Institute of Technology © Ed Crawley 2007
20
Use Context
Figure by MIT OCW.
• The whole product system fits within a use context, which includes
the objects normally present when the whole product operates, but
not necessary for it to deliver value
• The product/system usually interfaces with all of the objects in the
whole product system, but not with the objects in the use context
• The whole product system is usually use independent, while the
use context is context dependent
• The use context informs the function of the product/system
[Diagram: use context systems 1–4 surrounding the whole product system (product/system, operator, supporting systems 1–4), with the product/system boundary marked]
Massachusetts Institute of Technology © Ed Crawley 2007
21
Use Context Informs Design
• The use context informs the requirements for and the
design of the product/system
• For example:
– Whistle use context: sporting event, toys, Olympic stadium?
– Op amp use context: lab bench, consumer audio, spacecraft?
The architect must have a view of three layers of the form:
The product/system, the whole product and use context
Massachusetts Institute of Technology © Ed Crawley 2007
22
Boundaries and Interfaces
Figure by MIT OCW.
• The product/system is separated from other supporting
systems and the operand by a boundary
• The boundary is vital to the definition of architecture,
because it defines:
– What you architect, eventually deliver and are
responsible for
– What is “fixed” or “constrained” at the boundaries
• Everything that crosses a boundary must be facilitated
by an interface
• It is good practice to draw the system boundary clearly
on any product/system representation, and identify
interfaces explicitly
23
Camera - Whole Product & Boundaries
[Figure: digital camera whole product - supplied with camera: memory card, wrist strap, battery, software, USB interface cable, video cable; USB ports connect to Mac, PC, card reader, and printer; video connects to TV/Video.]
• What is the whole product system?
• What is the use context?
• What are the boundaries?
• What are the interfaces?
Figure by MIT OCW.
24
Op Amp - Whole Product System
[Figure: whole amp system - op amp with resistors R1 and R2 inside the product/system boundary; interfaces to the power supply (+5 V, -5 V), ground, input circuit (+ and - inputs), and output circuit.]
• There will be an interface to other elements of the whole product system
• Therefore knowledge of these elements informs interface control
25
Amp - Whole Product and Boundaries
[Figure: amp schematic - Vin through the input circuit to the op amp (+/- inputs) with feedback resistors R1, R2, then through the output circuit to Vout; interfaces: + input, - input, output, ground, +5 V, -5 V; ground and power supply lie outside the product/system boundary.]
• What is the whole product system?
• What is the use context?
• What are the boundaries?
• What are the interfaces?
26
Amp - Whole Product System Boundary and Interfaces
• In the matrix representation, the boundary is between rows and columns
• Interfaces are clearly identified in the off-diagonal blocks
lower diagonal spatial/topological, upper implementation
[Matrix figure: DSM whose rows and columns are the elements and interfaces listed below; lower-diagonal cells carry spatial/topological codes (t, x), upper-diagonal cells carry implementation codes (e, ei, c), with the upper triangle labeled “Implementation Interfaces”.]
resistor 1
resistor 2
op amp
+input interface
-input interface
output interface
ground interface
-5 V interface
+5 V interface
input circuit
output circuit
power supply
ground
t = touching, tangent
b = boundary
w = within, s = surrounding
ov = overlapping
da = at a specified distance or angular alignment
Spatial
Interfaces
m = mechanical connected (pressing, bolted, bonded, etc.)
e = electrical connected (soldered, etc.)
c = compilation with
no second symbol implies connector of some sort
i as second symbol implies integral
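The interface legend above can be made concrete with a small sketch. This is my own illustration, not part of the lecture: the `add_interface` helper is hypothetical, and only a few of the amp's interfaces are recorded.

```python
# A minimal sketch of a Design Structure Matrix (DSM): lower-triangle cells
# hold spatial/topological codes, upper-triangle cells hold implementation
# codes, and the boundary between rows and columns separates the elements.
elements = ["resistor 1", "resistor 2", "op amp", "input circuit", "output circuit"]
n = len(elements)
dsm = [["" for _ in range(n)] for _ in range(n)]

def add_interface(a, b, spatial, implementation):
    """Record one interface between elements a and b (hypothetical helper)."""
    i, j = elements.index(a), elements.index(b)
    lo, hi = min(i, j), max(i, j)
    dsm[hi][lo] = spatial          # lower diagonal: spatial/topological code
    dsm[lo][hi] = implementation   # upper diagonal: implementation code

add_interface("resistor 1", "op amp", "t", "e")      # touching; electrically connected
add_interface("resistor 2", "op amp", "t", "e")
add_interface("input circuit", "op amp", "x", "e")

for row_label, row in zip(elements, dsm):
    print(f"{row_label:15s}", row)
```

Reading the matrix then mirrors the slide: scan the lower triangle for how elements sit in space, the upper triangle for how the interfaces are implemented.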
27
Software - Boundaries and Interfaces
Interface?
temporary = array [ j+1 ]
array[ j+1 ] = array [ j ]
array[ j ] = temporary
Procedure exchange_contents(List array,
number j)
temporary = array [ j+1 ]
array[ j+1 ] = array [ j ]
array[ j ] = temporary
return array
Interface?
Product/system boundary
Product/system boundary
• The act of trying to draw a boundary will often reveal a poorly controlled or ambiguous interface
28
Whole Product System - Code Bubblesort
Procedure bubblesort (List array, number length_of_array)
for i=1 to length_of_array
for j=1 to length_of_array - i
if array[ j ] > array [ j+1 ] then
exchange_contents (array, j)
end if
end of j loop
end of i loop
return array
End of procedure
Procedure exchange_contents(List array,
number j)
temporary = array [ j+1 ]
array[ j+1 ] = array [ j ]
array[ j ] = temporary
return array
Product/system boundary
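The slide's pseudocode translates directly into a runnable sketch (Python here; the slide itself is language-neutral). Keeping exchange_contents as a separate function mirrors the product/system boundary the slide draws around each procedure.

```python
def exchange_contents(array, j):
    """Swap adjacent entries j and j+1 (the slide's exchange_contents procedure)."""
    temporary = array[j + 1]
    array[j + 1] = array[j]
    array[j] = temporary
    return array

def bubblesort(array):
    """The slide's bubblesort; the function signature is the product/system boundary."""
    length_of_array = len(array)
    for i in range(1, length_of_array + 1):
        # after pass i, the i largest elements are in place at the end
        for j in range(length_of_array - i):
            if array[j] > array[j + 1]:
                exchange_contents(array, j)
    return array

print(bubblesort([5, 1, 4, 2, 3]))  # -> [1, 2, 3, 4, 5]
```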
• What is the whole product system?
• What is the use context?
• What are the boundaries?
• What are the interfaces?
29
Bubblesort - Whole Product Boundary and Interfaces
• The structural interfaces in software determine sequence and nesting, and compilation
• Example assumes that procedure exchange_contents is separately compiled and called
lower diagonal spatial/topological, upper implementation
[Matrix figure: DSM whose rows and columns are the instructions and states listed below; lower-diagonal cells carry sequence/nesting codes (f, w, x), upper-diagonal cells carry compilation codes (c).]
proc state
I loop inst
j loop inst
if inst
temp inst
a(j+1) inst
a(j) inst
return state
t = touching, tangent
b = boundary
w = within, s = surrounding
ov = overlapping
da = at a specified distance or angular alignment
f = follows in sequence, l = leads in sequence
m = mechanical connected (pressing, bolted, bonded, etc.)
e = electrical connected (soldered, etc.)
c = compilation with
no second symbol implies connector of some sort
i as second symbol implies integral
30
Static Graphical User Interface - Whole Product?
• What is the whole product system?
• What is the use context?
• What are the boundaries?
• What are the interfaces?
Courtesy of ENPHO. Used with permission.
31
Form of Whole Product - Questions
• What is the product system? Its elements? Its formal structure?
• What are the supporting systems?
• What is the whole product system?
• What is the use context?
• What are the boundaries?
• What are the interfaces?
Figure by MIT OCW.
Characterization
• Characterization/Exhibition
– The relation between an object and its features or attributes
– Some attributes are states
Figure by MIT OCW.
[Figure: skateboard object with attributes Brand, Length, Velocity; OPM links: O exhibits A, A characterizes O, decomposes to, has attribute of.]
33
State
Figure by MIT OCW.
• Defined: State is a situation in which the object can exist for some positive duration of time (and implicitly can or will change)
• The combination of all the states describes the possible configurations of the system throughout the operational time.
• The states can be shown with the object, or alternatively within an attribute object.
[Figure: skateboard with states Stopped and Rolling, shown directly on the object and within a Velocity attribute.]
34
Summary - Form
• The physical/informational embodiment which exists, or has the potential to exist
• Objects + Structure (of form)
• Is a system attribute, created by the architect
• Product/system form combines with other supporting systems (with which it interfaces at the boundary) to form the whole product system that creates value
35
Architecture
• Consists of:
– Function
– Related by Concept
– To Form
[Figure: Function related through Concept to Form.]
36
Function Topics
• Function
• Emergence
• Process + Operand
• External function
• Internal function
37
Function - Defined
Figure by MIT OCW.
• The activities, operations and transformations that cause, create or contribute to performance (i.e. meeting goals)
• The actions for which a thing exists or is employed
• Is a product/system attribute
38
Function - Described
• Is conceived by the architect
• Must be conceived so that goals can be achieved
• Is what the system eventually does, the activities and transformations which emerge as sub-functions aggregate
• Should initially be stated in solution neutral language
39
Function - Other views:
• Function is what the system does; Form is what the system is [Otto]
• Function is the relationship between inputs and outputs, independent of form [Otto]
• Process is a transformation (creation or change) applied to one or more objects in the system [Dori]
40
Function is Associated with Form
• Change voltage proportional to current
• Change voltage proportional to charge
• React translation forces
• Carry moment and shear
• Conditionally transfer control
if array[ j ] > array [ j+1 ] then
• Assign a value
temporary = array [ j+1 ]
41
Emergence
Figure by MIT OCW.
• As elements of form are brought together, new function emerges
• Unlike form, which simply aggregates, functions do not combine in any “linear” way - it is often difficult or impossible, a priori, to predict the emergent function
• In design, the expected function may appear, may fail to appear, or an unintended function may appear
• It is exactly this property of emergence that gives systems their “power” - definition of ‘system’:
– A set of interrelated elements that perform a function, whose functionality is greater than the sum of the parts
42
Function Emerges as Form Assembles
• Attenuate high frequency (Vin → Vout)
• Increase force (Fin → Fout)
• Amplify low frequency signal (Vin → Vout)
• Regulate shaft speed
Emergent function depends on elements and structure
43
Function Emerges as Form Assembles
Procedure bubblesort (List array, number length_of_array)
for i=1 to length_of_array
for j=1 to length_of_array - i
if array[ j ] > array [ j+1 ] then
temporary = array [ j+1 ]
array[ j+1 ] = array [ j ]
array[ j ] = temporary
end if
end of j loop
end of i loop
return array
End of procedure
[Callouts: the three assignments are labeled “Exchange Contents”, the if-block “Conditionally Exchange Contents”, and the whole procedure “Sort from small to large”.]
44
Form - Function Sequence
[Figure: Function definition and Form definition linked by mappings - mapping function to form is conceptual design; mapping form to function is reverse engineering, bottom up design, and design knowledge capture.]
45
Design vs. Reverse Engineering
• In (conventional, top down) design, you know the functions (and presumably the goals) and try to create the form to deliver the function
• In reverse engineering, you know the form, and are trying to infer the function (and presumably eventually the goals)
• In bottom up design, you build elements and discover what emerges as they come together
• In design knowledge capture, you record the function along with the design of form
46
Exercise: Reverse Engineering of Function
• This is an exercise to demonstrate reverse engineering of function of objects with which you are not personally familiar
• What is the function of:
– A notebook?
– A pen?
– A glass?
– A set of instructions?
– An object I give you?
• Examine the object, and describe its form
• Try to discern and describe its function
Function and form are completely separate attributes. Even when form is evident, function is not easily inferred without a working knowledge of operation, operand and whole product system.
47
Function = Process + Operand
Figure by MIT OCW.
• Note that function consists of the action or transformation + the operand, an object which is acted on, or transformed:
– Change voltage
– React force
– Amplify signal
– Regulate shaft speed
– Contain liquid
– Lay down ink
– Stop an operator
– Count circles
• The action is called a process
• The object that is acted upon is called the operand
48
Processes
• Defined: A process is the pattern of transformation applied to one or more objects
• Cannot hold or touch a process - it is fleeting
• Generally creation, change, or destruction
• A process relies on at least one object in the pre-process set
• A process transforms at least one object in the pre-process set
• A process takes place along a time line
• A process is associated with a verb
49 | https://ocw.mit.edu/courses/esd-34-system-architecture-january-iap-2007/119970d5c8944785abb7d2da18bd72d9_lec2.pdf |
About Processes
• Processes are pure activity or transformation:
– Transporting
– Powering
– Supporting
– Sorting
• Time dependent, they are there and gone
• What a system does, vs. what it is (the form)
• Imagine a process that has been important to you - can you recreate it?
• Try to draw a picture of a process
50
The Operand
Operand
Processing
Function
• Operand is the object which is acted upon by the process, which may be created, destroyed, or altered:
– Image is captured
– Signal is amplified
– Array is sorted
• You often do not supply the operand, and there may be more than one operand to consider
• The double arrow is the generic link, called “effecting”
• A single headed arrow can be used to represent “producing” or “consuming”
51
Value Related Operand
• The product/system always operates on one or several operands
• It is the change in an attribute of one of the operands that is associated with the delivered value of the product system
• Because of its importance to value, it is useful to specially designate this value related operand
[Figure: whole product system containing the product/system and supporting systems, acting on the value related operand and other operands.]
52
Camera - Value Related Operand?
Product/system boundary
53
What’s the value related operand?
Figure by MIT OCW.
External Function Produces Benefit
Figure by MIT OCW.
• The system function which emerges at the highest level is the externally delivered function = process + value related operand
– Amplify low frequency signal
– Regulate shaft speed
– Sort Array
– Ensure safety of passenger
– Provide shelter to homeowner
• It is the external function that delivers benefit, and therefore value, of the product system
• The form generally is an instrument of delivering benefit, but is not unique to the benefit delivery
– If a competitor delivers the same function with another object which is superior, they will displace you
54
Externally Delivered Function
External function at the interface: supporting + vehicle
Product/system boundary
• External function is delivered across a boundary - the value related operand is external to the product/system
• External function is linked to the delivery of benefit
• What is the value related operand?
• What are the value related states?
• What is the externally delivered function?
Figure by MIT OCW.
55
Externally Delivered Function
External function at the interface: produce + digital image
Product/system boundary
External function at the interface: produce + analog image
Figure by MIT OCW.
56
Form Enables Process
Processing
Instrument
Object
• An enabler of a process is an object that must be present for that process to occur, but does not change as a result of the occurrence of the process
• Examples:
– Capturing by a digital camera
– Amplifying with an operational amplifier
– Sorting with bubblesort routine
• The “arrow” with the circular end represents an enabler, which can be an instrument (non-human) or an agent (human)
57
Three Core Ideas
• Operand - an element of form external to the product system, whose change is related to benefit and therefore value
• Process - the transformation or change in the operand
• Form - the instrument that executes the transformation (and attracts cost!) and consists of elements plus structure
• Wouldn’t it be nice if we had a semantically exact way of representing these 3 core ideas
• How can we do this with conventional representations?
58
Semantically Exact Representation with OPM
Operand
Processing
Instrument
Object
Function
Form
• Architecture is made up of operands + processes (functions) plus instrument object (form)
• Examples:
– Image is captured by digital camera
– Low frequency signal is amplified with an operational amplifier
– Tone is created by whistle
– Array is sorted by bubblesort routine
59
Value Related State Representation with OPM
Operand
Value states
Existing Desired
Processing
Instrument
Object
Value Related Function
Form
• Architecture delivers value when the externally delivered operand changes its state through the action of the processes enabled by instrument object (form)
• Examples:
– Image (none to existing) is captured by digital camera
– Low frequency signal (low amplitude to higher amplitude) is amplified with an operational amplifier
– Tone (none to existing) is created by whistle
– Array (unsorted to sorted) is sorted by bubblesort routine
60
A Tool - Object Process Modeling
• Object: that which has the potential of stable, unconditional existence for some positive duration of time.
• Form is the sum of objects and their structure
• Process: the pattern of transformation applied to one or more objects. Processes change an object or its attribute.
• Function emerges from processes acting on operands
• All links between objects and processes have precise semantics
[Figure: OPM legend - Objects, Processes.]
61
OPM Process Links
• P changes O (from state A to B): Transporting changes Person from Here to There
• P affects O: Transporting affects Person
• P yields or creates O: Transporting yields Entropy
• P consumes or destroys O: Transporting consumes Energy
• O is an agent of P (agent): Operator is an agent of Transporting
• O is an instrument of P: Skateboard is an instrument of Transporting
• P occurs if O is in state A: Purchasing occurs if Money is in state Enough (vs. None)
• P1 invokes P2 directly: Purchasing invokes Transporting
62
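As an illustration only, the link types on this slide can be encoded as typed edges between processes and objects. The data model below is hypothetical; the link names and examples follow the slide.

```python
# A hypothetical encoding of OPM's precise link semantics as typed edges.
# Each link is a (process, link_type, object) triple; the link types mirror
# the slide's list of process links.
LINK_TYPES = {"changes", "affects", "yields", "consumes",
              "agent", "instrument", "condition", "invokes"}

links = []

def link(process, kind, obj):
    # Reject any link type outside the OPM vocabulary above.
    assert kind in LINK_TYPES, f"unknown OPM link type: {kind}"
    links.append((process, kind, obj))

link("Transporting", "changes", "Person: Here -> There")
link("Transporting", "yields", "Entropy")
link("Transporting", "consumes", "Energy")
link("Transporting", "agent", "Operator")
link("Transporting", "instrument", "Skateboard")
link("Purchasing", "invokes", "Transporting")

for process, kind, obj in links:
    print(f"{process} --{kind}--> {obj}")
```

Modeling links as data rather than prose is what makes the semantics checkable: a tool can reject a diagram whose links fall outside the defined vocabulary.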
Value
Figure by MIT OCW.
• Value is benefit at cost
– Simple metrics are benefit/cost, benefit - cost, etc.
– Sometimes more complex metrics
• Benefit is worth, importance, utility as judged by a subjective observer (the beneficiary)
• Another view:
– Value: “how various stakeholders find particular worth, utility, benefit, or reward in exchange for their respective contributions to the enterprise” [Murman, et al. LEV p178]
63
Value and Architecture
• Value is benefit at cost
• Benefit is driven by function externally delivered across the interface
• Cost is driven by the design of the form - “parts attract cost”
• The relationship of function to form is therefore the relationship of benefit to cost
[Figure: Function (benefit) related through Concept to Form (cost).]
A good architecture delivers benefit at competitive cost
64
Value - Questions?
• What is the value related operand?
• What are the value related states that change?
• What is the externally delivered function?
Figure by MIT OCW.
65
Externally Delivered Function Emerges from Internal Function
• System function which emerges at the highest level is the externally delivered function which usually acts on the operand
– Example: amplify low frequency signal
• Systems have internal functions, which combine to produce the emergent externally delivered function
– Example: amplify …, change voltage ...
66
Externally Delivered and Internal Function
External function at the interface: supporting + vehicle
Product/system boundary
• External function is delivered across a boundary - the operand is external to the product/system
• External function emerges from internal function, which is executed by form
[Figure: bridge internal functions - carrying + tension, carrying + compression, reacting + loads, interfacing + bridge and ramp.]
Figure by MIT OCW.
67
Externally Delivered and Internal Function
[Figure: camera internal functions - restrain + camera, capture + digital image, store + digital image, power + camera, interface + camera and computer; external functions at the interface: produce + digital image, produce + analog image; product/system boundary shown.]
Figure by MIT OCW.
68
Delivered and Internal Function - Whistle
Figure by MIT OCW.
Product/system boundary
[Figure: whistle form elements - channel, step, hole, bump, ramp, cavity wall - linked by structural relations “is a boundary of”, “is aligned with”, and “touches”.]
• What is the value related operand and states?
• What are the internal functions?
• What is the externally delivered function?
• How are they mapped to form?
70
Delivered and Internal Function - Amp
[Figure: amp schematic - Vin, input circuit, op amp with R1 and R2, output circuit, Vout; interfaces: + input, - input, output, ground, +5 V, -5 V; ground and power supply outside the product/system boundary.]
• What is the value related operand and states?
• What are the internal functions?
• What is the externally delivered function?
• How are they mapped to form?
71
Product/system boundary
Delivered and Internal Function - Bubblesort
Procedure bubblesort (List array, number length_of_array)
for i=1 to length_of_array
for j=1 to length_of_array - i
if array[ j ] > array [ j+1 ] then
exchange_contents (array, j)
end if
end of j loop
end of i loop
return array
End of procedure
Procedure exchange_contents (List array, number j)
temporary = array [ j+1 ]
array[ j+1 ] = array [ j ]
array[ j ] = temporary
return array
• What is the value related operand and states?
• What are the internal functions?
• What is the externally delivered function?
• How are they mapped to form?
72
Product/system boundary
Summary - Function
• Function is the activity, operations, transformations that create or contribute to performance - it is operand + process
• Function is enabled by form, and emerges as form is assembled
• External function delivered to the operand is linked to the benefit of a product/system
• Function is a system attribute, conceived by the architect
73
Reference PDP
• PDP of Ulrich and Eppinger gives an “in the box” reference
• Starting point for Identify/Develop/Compare/Synthesize process
• Usable method for small groups (what are the principles?)
74
¿Reference PDP?
• Why would you have a formalized PDP?
• What are the essential features of the Ulrich and Eppinger model?
• What are the key limiting assumptions?
• How does this compare with the PDP in your enterprise?
75
Goals of a Structured PDP
• Customer-focused products
• Competitive product designs
• Team coordination
• Reduce time to introduction
• Reduce cost of the design
• Facilitate group consensus
• Explicit decision process
• Create archival record
• Customizable methods
Ref: Ulrich and Eppinger
76
Key Assumptions of U&E PDP
• In the “small”
• Technology in hand
• Not a corporate stretch
• Relatively simple (probably physical) form
• Build/Bust developmental approach
• Focused on “the PDP proper”, not upstream + life cycle
77
PDP: In the SMALL v. In the LARGE
Product: SIMPLE (countable interfaces, parts identifiable) v. COMPLEX (very many interfaces, parts abstracted)
Process: STATIC (goals & resources constant) v. EVOLVING (goals & resources changing)
Organization: SMALL (team at a table) v. LARGE (big team)
Other Factors: clean slate vs. reuse of legacy components; single product vs. platform; collocated vs. distributed team; team in enterprise vs. supplier involvement
78
Summary - Reference PDP
• A useful reference for in-the-small consumer focused mechanically based products
• A tool to reduce ambiguity by defining roles and processes
• Adaptable to a variety of contexts, staying within the key assumptions
79
Summary to Date
[Figure: Operand affected by Processing, enabled by Instrument (Form).]
Architecture?
Form = Elements + Structure
Function = Process + Operand
Form is the instrument of function
80 | https://ocw.mit.edu/courses/esd-34-system-architecture-january-iap-2007/119970d5c8944785abb7d2da18bd72d9_lec2.pdf |
THE DELTA-METHOD AND ASYMPTOTICS OF SOME ESTIMATORS
18.465, Feb. 24, 2005
The delta-method gives a way that asymptotic normality can be preserved under nonlinear, but differentiable, transformations. The method is well known; one version of it is given in J. Rice, Mathematical Statistics and Data Analysis, 2d ed., 1995. A simple form of it using only a first derivative, for functions of one variable, will be given here. (A multidimensional version is used in Section 3.7 of Mathematical Statistics, 18.466 course notes by R. Dudley, on the MIT OCW website.)
Theorem. Let Y_n be a sequence of real-valued random variables such that for some µ and σ, √n(Y_n − µ) converges in distribution as n → ∞ to N(0, σ²). Let f be a function from R into R having a derivative f′(µ) at µ. Then √n[f(Y_n) − f(µ)] converges in distribution as n → ∞ to N(0, f′(µ)²σ²).
Remarks. In statistics, where µ is an unknown parameter, one will want f to be differentiable at all possible µ (and preferably, for f′ to be continuous, although that is not needed in the proof).
Proof. We have Y_n − µ = O_p(1/√n) as n → ∞. Also, f(y) = f(µ) + f′(µ)(y − µ) + o(|y − µ|) as y → µ by definition of derivative. Thus

f(Y_n) = f(µ) + f′(µ)(Y_n − µ) + o_p(|Y_n − µ|),

so

√n[f(Y_n) − f(µ)] = f′(µ)√n(Y_n − µ) + √n · o_p(1/√n).

The last term is o_p(1), so the conclusion follows. □
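As a sanity check on the theorem (an illustration, not part of the original notes), one can simulate Y_n as the mean of n Uniform(0,1) variables, so µ = 1/2 and σ² = 1/12, and take f(y) = y², for which f′(µ) = 2µ = 1. The empirical variance of √n[f(Y_n) − f(µ)] should then be close to f′(µ)²σ² = 1/12.

```python
# Monte Carlo check of the delta-method with Y_n = mean of n Uniform(0,1)
# draws (mu = 1/2, sigma^2 = 1/12) and f(y) = y^2, so f'(mu) = 1 and the
# predicted limiting variance of sqrt(n)[f(Y_n) - f(mu)] is 1/12.
import math
import random

random.seed(0)
n, reps = 500, 5000
mu, sigma2 = 0.5, 1.0 / 12.0
f = lambda y: y * y
fprime = 2 * mu  # derivative of y^2 at mu

vals = []
for _ in range(reps):
    y_n = sum(random.random() for _ in range(n)) / n
    vals.append(math.sqrt(n) * (f(y_n) - f(mu)))

mean = sum(vals) / reps
var = sum((v - mean) ** 2 for v in vals) / reps
print(f"sample variance {var:.4f} vs predicted {fprime ** 2 * sigma2:.4f}")
```

With n = 500 and 5000 replications the sample variance lands close to 1/12 ≈ 0.0833, as the theorem predicts.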
Let’s say a distribution function F has a good median if F has a continuous density F′ = f with f(m) > 0 at m, the median of F. More precisely, f(m) > 0 and f continuous at m imply that F is strictly increasing in a neighborhood of m, so m is the unique x with F(x) = 1/2 and so the unique median.
sample median. First let n = 2k + 1 odd, so the nth sample median mn = X(k+1). If F
is the U [0, 1] distribution, let its order statistics be U(1) < · · · < U(n). Recall that U(j)
has a beta distribution βj,n−j+1 for each j, so the sample median U(k+1) has a βk+1,k+1
distribution. Its density is xk(1 − x)k/B(k + 1, k + 1) for 0 ≤ x ≤ 1 and 0 elsewhere. The
distribution has mean 1/2 and variance 1/[4(2k + 3)] = 1/[4(n + 2)].
This beta distribution is asymptotically normal with its mean and variance as n → ∞
or equivalently k → ∞. This fact is a special case of facts known since about 1920, but
lacking a handy reference, I’ll indicate a proof. Let y = x − (1/2), so |y| ≤ 1/2 where the density is non-zero. On that interval,
x^k(1 − x)^k = (1/2 + y)^k(1/2 − y)^k = (1/4 − y²)^k = 4^{−k}(1 − 4y²)^k.
We have (1 − 4y²)^k ≤ exp(−4ky²) for all y with |y| ≤ 1/2, and for any constant c and |y| ≤ c/√k, k·log(1 − 4y²) + 4ky² = O(k(4y²)²) = O(1/k) = O(1/n) as n → ∞ and k → ∞, so for such y (depending on k), (1 − 4y²)^k is asymptotic to exp(−4ky²). It follows that β_{k+1,k+1} is asymptotically normal with mean 1/2 and variance 1/(8k), which is asymptotic to 1/(4n). In other words, √n[U_(k+1) − 1/2] converges in distribution as n → ∞ to N(0, 1/4).
Now for any distribution function F with a good median m, and n = 2k + 1 odd, the sample median mn = X_(k+1) has the distribution of F←(U_(k+1)) because F← is monotonic (non-decreasing, and strictly increasing in a neighborhood of 1/2). We have F←(1/2) = m. So by the delta-method theorem above, √n(mn − m), being equal in distribution to √n(F←(U_(k+1)) − F←(1/2)), converges in distribution as n → ∞ to N(0, (F←)′(1/2)²/4) = N(0, 1/(4f(m)²)), as stated in Randles and Wolfe, p. 227, line 2, for symmetric distributions.
For n = 2k even, U_(k) and U_(k+1) have β_{k,k+1} and β_{k+1,k} distributions respectively, and |U_(k+1) − U_(k)| = Op(1/n). For the sample median m_{U,n} = [U_(k) + U_(k+1)]/2, we then also have |m_{U,n} − U_(k)| = Op(1/n). By a small adaptation of the argument for the n odd case, we get that √n(U_(k) − 1/2) converges in distribution to N(0, 1/4) as n = 2k → ∞, and so does √n(m_{U,n} − 1/2). So, for a distribution F with a good median m and sample medians mn, we get √n(mn − m) converging in distribution as n → ∞ to N(0, 1/(4f(m)²)), just as when n is odd and as stated by Randles and Wolfe.
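This limit is easy to see in simulation. A sketch (my own, not from the notes) for standard normal data, where m = 0, f(m) = 1/√(2π), and the predicted asymptotic variance is 1/(4nf(m)²) = π/(2n):

```python
import math
import random
import statistics

def median_variance(n, reps, seed=1):
    """Monte Carlo variance of the sample median of n standard normal draws."""
    rng = random.Random(seed)
    meds = [statistics.median(rng.gauss(0, 1) for _ in range(n))
            for _ in range(reps)]
    return statistics.variance(meds)

n, reps = 101, 4000
v = median_variance(n, reps)
predicted = math.pi / (2 * n)  # 1/(4 n f(m)^2) with f(m) = 1/sqrt(2 pi)
print(v / predicted)  # should be close to 1
```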
Next, let’s consider the Hodges-Lehmann estimator. In this case, beside assuming
F has a good median m, we’ll assume the distribution is symmetric around m. (If a distribution is symmetric around a point θ, then θ must be the median.) In other words,
there is a density f0 with f0(−x) = f0(x) for all x, f0(0) > 0, f0 is continuous at 0, and the
density f is fm(x) ≡ f0(x − m), which is then symmetric around m. Given X1, ..., Xn i.i.d.
with a distribution F satisfying the given conditions, but otherwise unknown, the Hodges-
Lehmann estimator θˆHL is the median of the numbers (Xi+Xj)/2 for 1 ≤ i ≤ j ≤ n. There
are n(n + 1)/2 of these numbers (which are called Walsh averages). The sample median is
an estimator of the unknown m, and θˆHL is another which is often better. To look into it
we’ll consider some U -statistics. For any real x, x1, and x2 let hx(x1, x2) = Ψ(2x−x1 −x2).
This kernel is symmetric under interchanging x1 and x2 for each x.
We want to find the asymptotic behavior of θˆHL − m, specifically, that it’s asymptot-
ically normal with mean 0 and variance C/n for some C depending on F . In doing this,
we can assume m = 0, because subtracting m from all the observations makes m = 0 and
doesn’t change the distribution of the difference. So we can assume F is symmetric around
0.
Let G be the distribution function of X1 + X2. Then G has a density g given by the convolution of f with itself, g(x) = ∫_{−∞}^{∞} f(x − y)f(y) dy. We have for all x
E hx(X1, X2) = P(X1 + X2 < 2x) = G(2x).
The quantity called ζ1, entering into the asymptotic variance of the U -statistic formed
from the kernel hx, is given by
ζ1 = P (X1 + X2 < 2x, X1 + X3 < 2x) − G(2x)2 .
We are interested especially in x = 0 since that is now the median and center of symmetry
of F and of G. For x = 0 we get
P(X1 + X2 < 0, X1 + X3 < 0) = ∫_{−∞}^{∞} F(−u)² dF(u) = ∫_{−∞}^{∞} [1 − F(u)]² dF(u) = ∫_0^1 (1 − t)² dt = 1/3,
and Eh0 = 1/2, so ζ1 = 1/12. We have a kernel of order r = 2, and the asymptotic
variance of a U-statistic is r²ζ1. Defining a U-statistic depending on x we have
U(x)^(n) = (n choose 2)^(−1) · Σ_{1≤i<j≤n} Ψ(2x − Xi − Xj).
For x = 0, bearing in mind that under symmetry around 0, −Xi − Xj is equal in distribution to Xi + Xj, this becomes the U-statistic that Randles and Wolfe call U4 and is closely related to the Wilcoxon signed-rank statistic. We get that √n(U(x=0)^(n) − 1/2) converges in distribution as n → ∞ to N(0, 1/3).
If we included all the terms with i = j in the sum defining the U-statistic, giving another statistic V^(n), it would make a difference of O(n) in the sum, thus O(1/n) in U^(n), thus O(1/√n) in √n·U^(n), so √n(V^(n) − 1/2) also has a distribution converging to N(0, 1/3). In other words, V^(n) = 1/2 + Zn/√(3n) + op(1/√n) where Zn converges in distribution to N(0, 1) as n → ∞.
The Hodges-Lehmann estimate θ̂HL is an x for which V(x)^(n) = 1/2 + O(1/n²). For x near 0, specifically |x| = O(1/√n), E hx = G(2x), which will be within O(1/√n) of 1/2. The asymptotic variance of V(x)^(n) will still be 1/(3n) plus smaller terms that don’t affect the asymptotic distribution. So we will have, where again Zn is asymptotically N(0, 1),
V(x)^(n) = G(2x) + Zn/√(3n) + op(1/√n).
If this equals 1/2 (within O(1/n²)), then
θ̂HL = x = G←(1/2 − Zn/√(3n)) + op(1/√n).
It follows by the delta-method that the distribution of √n(θ̂HL − m) = √n·θ̂HL converges to N(0, σ²) where
σ² = (G←)′(1/2)²/12 = 1/(12G′(0)²) = 1/(12g(0)²),
and by convolution g(0) = ∫_{−∞}^{∞} f(0 − x)f(x) dx = ∫_{−∞}^{∞} f(x)² dx by symmetry. So the asymptotic variance of the Hodges-Lehmann statistic is 1/[12n{∫_{−∞}^{∞} f(x)² dx}²], as indicated by Randles and Wolfe on p. 228, (7.3.12) and (7.3.14).
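As a sanity check (my own sketch, not in the notes): for N(0, 1) data, ∫f² = 1/(2√π), so the predicted asymptotic variance of θ̂HL is 1/[12n·(1/(2√π))²] = π/(3n). The Walsh averages can be enumerated directly for moderate n:

```python
import math
import random
import statistics

def hodges_lehmann(xs):
    """Median of the n(n+1)/2 Walsh averages (X_i + X_j)/2, i <= j."""
    n = len(xs)
    walsh = [(xs[i] + xs[j]) / 2 for i in range(n) for j in range(i, n)]
    return statistics.median(walsh)

def hl_variance(n, reps, seed=2):
    rng = random.Random(seed)
    ests = [hodges_lehmann([rng.gauss(0, 1) for _ in range(n)])
            for _ in range(reps)]
    return statistics.variance(ests)

n, reps = 50, 1000
ratio = hl_variance(n, reps) / (math.pi / (3 * n))
print(ratio)  # should be close to 1
```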
Note. We considered a family of U-statistics indexed by a parameter x. There is a theory of such families, called U-processes, begun in some papers by Deborah Nolan and David Pollard in Annals of Statistics. In the present case, since U(x)^(n) is non-decreasing in x, we have a relatively simple U-process, but still, the argument was incomplete.
MIT 3.071
Amorphous Materials
3: Glass Forming Theories
Juejun (JJ) Hu
1
After-class reading list
Fundamentals of Inorganic Glasses
Ch. 3 (except section 3.1.4)
Introduction to Glass Science and Technology
Ch. 2
3.022 nucleation, precipitation growth and interface
kinetics
Topological constraint theory
M. Thorpe, “Continuous deformations in random networks”
J. Mauro, “Topological constraint theory of glass”
2
Glass formation from liquid
[Figure: V, H vs. T — liquid, supercooled liquid, glass transition, glass, and crystal branches; Tf and Tm marked]
• Supercooling of liquid and suppression of crystallization
• Glass transition: from supercooled liquid to the glassy state
• Glass forming ability: the structural origin
3
Glass forming theories
The kinetic theory
Nucleation and growth
“All liquids can be vitrified provided that the rate of
cooling is fast enough to avoid crystallization.”
Laboratory glass transition
Potential energy landscape
Structural theories
Zachariasen’s rules
Topological constraint theory
4
Crystallization is the opposite of glass formation
Image is in the public domain.
Crystallized
Amorphous
Suspended Changes in Nature, Popular Science 83 (1913).
Thermodynamics of nucleation
[Figure: free energies G of liquid and crystal vs. temperature, crossing at Tm]
When T < Tm: ΔG_{l→s} = ΔH_{l→s} − T·ΔS_{l→s}; at Tm, ΔG = 0, so ΔS_{l→s} = ΔH_{l→s}/Tm. The driving force for nucleation is therefore ΔG_v ≈ ΔS_{l→s}·ΔT = ΔH_{l→s}·ΔT/Tm, where ΔT = Tm − T.
6
Thermodynamics of nucleation
[Figure: ΔG vs. nucleus size — energy barrier W for homogeneous and heterogeneous nucleation]
Surface energy contribution vs. volume driving force: ΔG(r) = (4π/3)·r³·ΔG_v + 4π·r²·γ; the maximum of ΔG(r) is the energy barrier for nucleation, W.
7
Kinetics of nucleation
[Figure: ΔG vs. size, barrier W]
Nucleation rate: R_n ∝ D·exp(−W/kBT), with diffusivity D ∝ exp(−E_D/kBT). As T → Tm, W → ∞ so R_n → 0; as T → 0, D → 0 so R_n → 0.
8
Kinetics of growth
[Figure: atoms attaching to and detaching from a nucleus]
Flux into the nucleus: F₊ ∝ exp(−E/kBT). Flux out of the nucleus: F₋ ∝ exp(−(E + ΔG)/kBT).
9
Net diffusion flux: R_g ∝ F₊ − F₋ ∝ exp(−E/kBT)·[1 − exp(−ΔG/kBT)]. As T → Tm, ΔG → 0 so R_g → 0; as T → 0, R_g → 0.
10
Crystal nucleation and growth
Metastable
zone of
supercooling
Tm
Driving force:
supercooling
Both processes
are thermally
activated
11
Time-temperature-transformation diagram
Critical
cooling rate Rc
Driving force
(supercooling)
limited
Diffusion
limited
R. Busch, JOM 52, 39-42 (2000)
12
Critical cooling rate and glass formation

Material        Critical cooling rate (°C/s)
Silica          9 × 10⁻⁶
GeO2            3 × 10⁻³
Na2O·2SiO2      6 × 10⁻³
Salol           10
Water           10⁷
Vitreloy-1      1
Typical metal   10⁹
Silver          10¹⁰

Technique                 Typical cooling rate (°C/s)
Air quench                1-10
Liquid quench             10³
Droplet spray             10²-10⁴
Melt spinning             10⁵-10⁸
Selective laser melting   10⁶-10⁸
Vapor deposition          Up to 10¹⁴
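A rough use of these critical cooling rates: the thickest sample that can be vitrified by conductive cooling scales as d_max ~ √(α·ΔT/R_c), with α the thermal diffusivity. A sketch with illustrative numbers (the diffusivities and ΔT ≈ 500 K of supercooling are my assumptions, not values from the slides):

```python
import math

def max_thickness(alpha_m2_s, delta_T_K, Rc_K_s):
    """d_max ~ sqrt(alpha * delta_T / Rc): the thickness that conduction can
    cool through delta_T at least as fast as the critical rate Rc."""
    return math.sqrt(alpha_m2_s * delta_T_K / Rc_K_s)

# Critical cooling rates from the table; diffusivities are assumed.
d_silica = max_thickness(1e-6, 500, 9e-6)  # Rc = 9e-6 C/s  -> meters
d_metal = max_thickness(1e-5, 500, 1e9)    # Rc = 1e9 C/s   -> microns
print(f"silica: {d_silica:.1f} m, typical metal: {d_metal * 1e6:.1f} um")
```

The contrast spans six orders of magnitude, which is why oxide glasses are cast in bulk while metallic glasses historically required melt spinning of thin ribbons.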
Maximum glass sample thickness: d_max ~ √(α·ΔT/R_c), where α is the thermal diffusivity and R_c the critical cooling rate
13
Glass formation from liquid
[Figure: V, H vs. T — glasses 1, 2, 3 obtained at increasing cooling rate freeze at different fictive temperatures; Tm marked]
Glasses obtained at different cooling rates have different structures
With infinitely slow cooling, the ideal glass state is obtained
14
Potential energy landscape (PEL)
The metastable glassy state
E
Metastable
glassy state
Thermodynamically
stable crystalline state
Structure
15
Potential energy landscape (PEL)
PE
Ideal glass
Laboratory
glass states
Crystal
Atomic coordinates r1, r2, … r3N
16
Glass
Liquid
Laboratory glass transition
Liquid: ergodic
Glass: nonergodic,
confined to a few
local minima
Inter-valley transition time: τ = ν₀⁻¹·exp(B/kBT)
B: barrier height
ν₀: attempt frequency
The system appears glassy when τ ≫ t_obs.
17
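To see the orders of magnitude involved in ergodicity breaking, here is a small sketch (my own illustrative numbers — a 1 eV barrier and a 10¹³ Hz attempt frequency are assumptions, not values from the slide):

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def transition_time(barrier_eV, T_K, attempt_Hz=1e13):
    """Inter-valley transition time tau = exp(B / kB T) / nu0."""
    return math.exp(barrier_eV / (K_B * T_K)) / attempt_Hz

# A 1 eV barrier: frozen (glassy) at room temperature, ergodic when hot
print(transition_time(1.0, 300))  # thousands of seconds: >> typical t_obs
print(transition_time(1.0, 600))  # tens of microseconds: liquid-like
```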
Glass former: high valence
state, covalent bonding with O
Modifier: low valence state,
ionic bonding with O
Network modifiers
Glass formers
Intermediates
18
Zachariasen’s rules
Rules for glass formation in an oxide AmOn
An oxygen atom is linked to no more than two atoms of A
The oxygen coordination around A is small, say 3 or 4
Open structures with covalent bonds
Small energy difference between glassy and crystalline states
The cation polyhedra share corners, not edges, not faces
Maximize structure geometric flexibility
At least three corners are shared
Formation of 3-D network structures
Only applies to most (not all!) oxide glasses
Highlights the importance of network topology
19
Classification of glass network topology
• Floppy / flexible (underconstrained): # (constraints) < # (DOF) — low barrier against crystallization; crystalline clusters (nuclei) readily form and percolate
• Isostatic (critically constrained): # (constraints) = # (DOF) — optimal for glass formation
• Stressed rigid (overconstrained): # (constraints) > # (DOF)
[Figure: three PE landscapes vs. atomic coordinates r1, r2, … r3N]
20
Number of constraints
Denote the atom coordination number as r
Bond stretching constraint: r/2 per atom (each bond is shared by two atoms)
Bond bending constraint: 2r − 3 per atom
  One bond angle is defined when r = 2
  Orientation of each additional bond is specified by two angles
Total constraint number: n = r/2 + (2r − 3) = 5r/2 − 3
Mean coordination number: ⟨r⟩ = Σ_r n_r·r / Σ_r n_r
21
Isostatic condition / rigidity percolation threshold
Total number of degrees of freedom: 3 per atom
Isostatic condition: 5⟨r⟩/2 − 3 = 3 ⟹ ⟨r⟩ = 2.4
Examples:
  Ge_x Se_{1−x}: ⟨r⟩ = 4x + 2(1 − x) = 2x + 2
  As_x S_{1−x}: ⟨r⟩ = 3x + 2(1 − x) = x + 2
  Si_x O_{1−x}: ⟨r⟩ = 4x + 2(1 − x) = 2x + 2
Why oxides and chalcogenides make good glasses?
22
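The counting rules above are simple enough to automate. A small sketch (my own) using n = r/2 + (2r − 3), which recovers both the ⟨r⟩ = 2.4 threshold and the isostatic composition of Ge_x Se_{1−x}:

```python
from fractions import Fraction as F

def constraints(r):
    """Constraints per atom: r/2 bond stretching + (2r - 3) bond bending."""
    return F(r) / 2 + (2 * r - 3)

def mean_r(x, r_a, r_b):
    """Mean coordination number of a binary A_x B_(1-x) network."""
    return x * r_a + (1 - x) * r_b

# Isostatic condition: constraints per atom = 3 degrees of freedom
r_iso = F(12, 5)                      # <r> = 2.4
assert constraints(r_iso) == 3

# Ge_x Se_(1-x): <r> = 4x + 2(1-x) = 2x + 2 = 2.4  =>  x = 0.2
x_iso = F(1, 5)
assert mean_r(x_iso, 4, 2) == r_iso
print("isostatic GexSe1-x at x =", x_iso)  # -> 1/5
```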
Temperature-dependent constraints
The constraint number should be evaluated at the glass forming temperature (rather than room temperature)
Silica glass Si_x O_{1−x}:
  Bond stretching: r/2 for both Si (r = 4) and O (r = 2)
  O-Si-O bond angle: 2r_Si − 3 = 5 bending constraints per Si
  (The Si-O-Si angle at O is left unconstrained — its distribution is broad.)
  Isostatic condition: n = 7x + (1 − x) = 3 ⟹ x = 1/3, i.e. SiO2
[Figure: normalized distribution of the Si-O-Si bond angle in silica glass]
23
Temperature-dependent constraints
Each type of constraint is associated with an onset
temperature above which the constraint vanishes
“Topological constraint theory of glass,” ACerS Bull. 90, 31-37 (2011).
24
Enumeration of constraint number
Bond stretching constraints (coordination number):
8-N rule: applies to most covalently bonded nonmetals (O, S, Se,
P, B, As, Si, etc.)
Exceptions: heavy elements (e.g. Te, Sb)
Bond bending constraints: #BB = 2r − 3
Glasses with low forming temperature: atomic modeling or experimental characterization required to ascertain the number of active bond bending constraints
25
Property dependence on network rigidity
Many glass properties exhibit extrema or kinks at the rigidity percolation threshold ⟨r⟩ = 2.4
J. Non-Cryst. Sol. 185, 289-296 (1995).
26
Measuring glass forming ability
Figure of merit (FOM): ΔT = T_x − T_g
  T_x: crystallization temperature
  T_g: glass transition temperature
[Figure: C_P vs. T, showing T_g and T_x]
T_g is dependent on measurement method and thermal history
Alternative FOM: Hruby coefficient K_H = (T_x − T_g)/(T_m − T_x)
27
Summary
Kinetic theory of glass formation
Driving force and energy barrier for nucleation and growth
Temperature dependence of nucleation and growth rates
T-T-T diagram and critical cooling rate
Laboratory glass transition
Potential energy landscape
Ergodicity breakdown: laboratory glass transition
Path dependence of glass structure
Glass network topology theories
Zachariasen’s rules
Topological constraint theory
Parameters characterizing glass forming ability (GFA)
28
MIT OpenCourseWare
http://ocw.mit.edu
3.071 Amorphous Materials
Fall 2015
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Session 3: Inventory Analysis
Massachusetts Institute of Technology
Department of Materials Science & Engineering
ESD.123/3.560: Industrial Ecology – Systems Perspectives
Randolph Kirchain
Introduction: Slide 31
LCA: Methodology
• Goal & Scope Definition
– What is the unit of analysis?
– What materials, processes, or
products are to be
considered?
• Inventory Analysis
–
Identify & quantify
• Energy inflows
• Material inflows
• Releases
• Impact Analysis
– Relating inventory to impact
on world
Goal & Scope Definition → Inventory Analysis → Impact Analysis, with Interpretation running alongside all three stages
Inventory Analysis
• Building a system model of the flows within your system
– System boundaries and flow types defined in Goal &
Scope
– Typically includes only environmentally relevant flows
• E.g., exclude waste heat, water or O2 emissions
• Steps
– Catalog what activities to include (draw a flowchart)
– Data collection
– Computation of flows per unit of analysis
• Serious challenges around allocation
Flowchart Examples: Intl Al Inst. 2003
Intl Al Inst. 2003
Figures removed due to copyright restrictions.
Source: Flowcharts on p. 5 and p. 15 in "Life Cycle Assessment of Aluminum: Inventory Data for the Worldwide Primary Aluminium Industry." International Aluminium Institute. March 2003.
Flowchart Examples:
Intl Al Inst. 2003
Figures removed due to copyright restrictions.
Source: Flowcharts on p. 5 and p. 15 in "Life Cycle Assessment of Aluminum: Inventory Data for the Worldwide Primary Aluminium Industry." International Aluminium Institute. March 2003.
Inventory Analysis: Data collection
• Data collection
–
Inflows
• Materials
• Energy
– Outflows
• Primary product
• Other products
• Releases to land, water
and air
– Transport
• Distance
• Mode
• Data collection (cont.)
– Qualitative
• Description of activity
under analysis
• Geographic location
• Timeframe
• Key issue:
– Site specific vs. Industry Avg
• Data sources
– Published studies
– Scientific literature
– Industry & government records
– Industry associations
– Private consultants
Calculating the Inventory
• Identify interconnection flows
• Normalize data
– Convert all absolute flows to a quantity relative to one
outflow
– Typically reference flow serves as interconnection
Note: Since LCAs are typically linear, choice of
reference outflow is arbitrary
• Calculate magnitude of interconnection flows
– For linear system, solvable using linear algebra
• Scale all flows relative to interconnection flows
• Sum all equivalent flows
Product Production Overview
•Product P produced in plant C
– C: Metal sheets cut and pressed to make P
•Plant B delivers metal sheets to plant C
– B: Ingots melted and rolled into sheets
•Ingots come from plant A
– A: Mineral is extracted, turned into metal, cast
into ingots
Product Production Details
• Transport:
– A to B: 1000 km, by truck
– B to C: 0 km (adjacent)
• Scrap:
– Process scrap from C returned to B for remelting
• Product P:
– Weight = 40 g
– 6 m2 metal sheet needed to make 1,000
– Metal thickness = 1.0 mm
– Metal Density = 8,000 kg/m3
Environmental Data – Plant A
Summary: Products: Metal ingots; Raw Material: Mineral

Description                    Quantity    Units         Details
Total Annual Production        1200        tonnes/year   Product A
Use of raw material            4800        tonnes/year   Raw A
Use of energy in the process   6.00E+06    MJ/year       Oil Combustion
Emissions to air               600         kg/year       HCl
Emissions to water             600         kg/year       Cu
Non-hazardous solid waste      3800        tonnes/year   Solid Waste
Environmental Data – Plant B
Summary: Products: Metal Sheets; Raw Material: Metal ingots and process scrap

Description                    Quantity    Units         Details
Total Annual Production        1600        tonnes/year   Sheets
Use of raw material - ingots   900         tonnes/year   Ingots
Use of raw material - scrap    700         tonnes/year   Scrap
Use of energy - heating        5.63E+05    kWh/year      Electricity
Use of energy - rolling        3.26E+05    kWh/year      Electricity
Emissions to air               480         kg/year       HC
Environmental Data – Plant C
Summary: Products: Consumer Product P; Raw Material: Metal Sheets

Description                    Quantity    Units         Details
Total Annual Production        400         tonnes/year   Product P
Use of raw material            480         tonnes/year   Sheets
Use of energy - oil            3.00E+05    MJ/year       Oil
Use of energy - electricity    2.22E+05    kWh/year      Electricity
Emissions to air               250         kg/year       HC
Process Scrap for Recycling    80          tonnes/year   Scrap
Environmental Data – Transportation and Energy Production

Transportation – Diesel Fuel
Driving Conditions   Energy Consumption   Units
Long Haul            1                    MJ/tonne-km
City Traffic         2.7                  MJ/tonne-km

Energy Production Emissions (g/MJ fuel consumed)
Substance   Diesel   Oil
HC          0.208    0.018
NOx         1.3      0.15
CO2         78.6     79.8
Flowchart of System Being Analyzed
Mineral (M) → Metal Ingot Production → Ingots (I) → Transport → Sheet Production → Metal Sheets (Sh) → Part Production → Product (P)
Process Scrap (Sc): Part Production → (Sc_out) → (Sc_in) → Sheet Production, with internal scrap (Sc*) at Sheet Production
Calculating the Inventory
• Identify interconnection flows
• Normalize data
– Convert all absolute flows to a quantity relative to one
outflow
– Typically reference flow serves as interconnection
Note: Since LCAs are typically linear, choice of
reference outflow is arbitrary
• Calculate magnitude of interconnection flows
– For linear system, solvable using linear algebra
• Scale all flows relative to interconnection flows
• Sum all equivalent flows
Results – Total Inventory
[Figure: bar chart of the inventory per functional unit (g/fu, 0-300), broken down by stage (A, Transport, B, C) for HCl, Cu, Solid Waste (kg), HC, NOx, and CO2]
Results – How Much Energy?
[Figure: bar chart by stage (A, Transport, B, C) — Oil: ~200 (A) + 30 (C) MJ/fu; Diesel: 40 MJ/fu (Transport); Electricity: 27 (B) + 22 (C) kWh/fu]
• Totals:
– Oil = 230 MJ / fu
– Diesel = 40 MJ / fu
– Electricity = 49 kWh/fu
• If electrical generation is
50% oil / 50 % Diesel, what
is total energy carrier
consumption?
– 24.5 kWh from Oil
– 24.5 kWh from diesel
• Units Conversion:
1 kWh = 3.6 MJ
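These totals can be reproduced directly from the plant data above (a sketch, my own arithmetic; I take the functional unit to be 1,000 units of product P — 40 kg of P, 48 kg of sheet — and credit the 8 kg of returned process scrap against virgin ingot demand at plant B, which reproduces the totals on this slide):

```python
# Functional unit: 1000 units of product P.
sheet_kg = 6 * 0.001 * 8000          # 6 m^2 x 1.0 mm x 8000 kg/m^3 = 48 kg
product_kg = 0.040 * 1000            # 40 g x 1000 units = 40 kg
scrap_kg = sheet_kg - product_kg     # 8 kg returned from C to B

# Intensities per kg of each plant's output, from the annual data
oil_A = 6.00e6 / 1200e3              # MJ per kg ingot (plant A)
oil_C = 3.00e5 / 400e3               # MJ per kg product (plant C)
elec_B = (5.63e5 + 3.26e5) / 1600e3  # kWh per kg sheet (plant B)
elec_C = 2.22e5 / 400e3              # kWh per kg product (plant C)

ingot_kg = sheet_kg - scrap_kg       # mass balance at B: 40 kg virgin ingots
oil = oil_A * ingot_kg + oil_C * product_kg     # MJ per fu
diesel = 1000 * 1.0 * (ingot_kg / 1000.0)       # 1000 km x 1 MJ/tonne-km, long haul
elec = elec_B * sheet_kg + elec_C * product_kg  # kWh per fu

print(round(oil), round(diesel), round(elec))  # -> 230 40 49
```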
Considering Energy from Electricity
•Although we are consuming 49 kWh of energy,
with 50% from Oil and 50% from Diesel
•We are NOT consuming
49 kWh x 3.6 MJ/kWh = 176 MJ of energy carriers
Why?
•Energy conversion to electricity is far from 100%
efficient
Considering Energy from Electricity
[Figure: conversion efficiency, % (MJ out / MJ in), 0-45%, for Coal Thermal, Combined Cycle NG, Fuel Oil Thermal, Diesel Generation, Gas Turbine]
Results – How Much Energy?
Total Energy Carriers Consumed
Assuming a conversion efficiency of Oil = 32%, Diesel = 28%:
[Figure: bar chart, 0-600 MJ/fu — Oil: 200 (A) + 150 (B) + 155 (C) MJ/fu; Diesel: 40 (Transport) + 171 (B) + 143 (C) MJ/fu]
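The carrier totals can be scripted in a few lines (a sketch of the same arithmetic; the 50/50 oil/diesel generation split is the assumption from the earlier slide):

```python
# Direct fuel use per functional unit, from the inventory results
oil_direct_MJ, diesel_direct_MJ, elec_kWh = 230.0, 40.0, 48.87

MJ_PER_KWH = 3.6
eff = {"oil": 0.32, "diesel": 0.28}  # MJ electricity out / MJ fuel in

def fuel_for_electricity(kwh, efficiency):
    """Primary fuel (MJ) burned to deliver kwh of electricity."""
    return kwh * MJ_PER_KWH / efficiency

oil_total = oil_direct_MJ + fuel_for_electricity(elec_kWh / 2, eff["oil"])
diesel_total = diesel_direct_MJ + fuel_for_electricity(elec_kWh / 2, eff["diesel"])
print(round(oil_total), round(diesel_total))  # -> 505 354
```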
Does this matter?
IAI Inventory for 1000 kg of Primary Aluminum

Source        Usage        Unit Energy Content   Total Energy Consumed (MJ)
Coal          186 kg       32.5 MJ/kg            6,045
Diesel Oil    13 kg        48 MJ/kg              624
Heavy Oil     238 kg       42 MJ/kg              9,996
Natural Gas   308 m3       41 MJ/m3              12,628
Total Thermal                                    29,293
Electricity   15,711 kWh                         56,560 w/o efficiency; 171,393 w/ efficiency
Total                                            85,853 w/o efficiency; 200,686 w/ efficiency

Ignoring the efficiency of electrical conversion drastically alters the energy picture!
MIT OpenCourseWare
http://ocw.mit.edu
18.969 Topics in Geometry: Mirror Symmetry
Spring 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIRROR SYMMETRY: LECTURE 6
DENIS AUROUX
1. The Quintic 3-fold and Its Mirror
The simplest Calabi-Yaus are hypersurfaces in toric varieties, especially smooth hypersurfaces X in CP^{n+1} defined by a polynomial of degree d = n + 2, i.e. a section of O_{P^{n+1}}(d). Smoothness implies that NX ≅ O_{P^{n+1}}(d)|_X, defined by v ↦ ∇_v P = dP(v), so TP^{n+1}|_X = TX ⊕ NX = TX ⊕ O_{P^{n+1}}(d)|_X (“adjunction”). Passing to the dual and taking the determinant, we obtain
(1) Ω^{n+1}_{P^{n+1}}|_X ≅ Ω^n_X ⊗ O_{P^{n+1}}(−d)|_X
Now:
(2) TP^{n+1} ⊕ C = Hom(ℓ, ℓ^⊥) ⊕ Hom(ℓ, ℓ) = Hom(ℓ, C^{n+2}) = Hom(O(−1), C^{n+2}),
implying that TP^{n+1} ⊕ O ≅ O(1)^{n+2}. Again, passing to the dual and taking the determinant, we obtain
(3) Ω^{n+1}_{P^{n+1}} ≅ O(−1)^{⊗(n+2)} = O(−(n + 2))
We finally have
(4) O_{P^{n+1}}(−(n + 2))|_X ≅ Ω^n_X ⊗ O_{P^{n+1}}(−d)|_X ⟹ Ω^n_X ≅ O
if d = n + 2, i.e. our X is indeed Calabi-Yau.
Example. Cubic curves in P2 correspond to elliptic curves (genus 1, isomorphic
to tori), while quartic surfaces in P3 are K3 surfaces.
The quintic in P⁴ is the world’s most studied Calabi-Yau 3-fold. The cohomology of the quintic can be computed via the Lefschetz hyperplane theorem: inclusion induces isomorphisms i* : H^r(CP⁴) → H^r(X) for r < n = 3, so H¹(X) = 0, H²(X) = H²(CP⁴) = Z. Thus, h^{1,0} = 0 and h^{2,0} = 0; by an argument seen before, h^{1,1} = 1. Moreover,
(5) χ(X) = e(TX)·[X] = c₃(TX)·[X]
By working out c(TP⁴)|_X = c(TX)·c(O_{P⁴}(5))|_X (from adjunction), we have
(6) c(TP⁴) = c(TP⁴ ⊕ O) = c(O(1)^{⊕5}) = (1 + h)⁵
where h = c₁(O(1)) is the generator of H²(CP⁴) and is Poincaré dual to the hyperplane. Restricting to X gives
(7) (1 + h|_X)⁵ = 1 + 5h|_X + 10h²|_X + 10h³|_X = (1 + c₁ + c₂ + c₃)(1 + 5h|_X),
so c₁ = 0, c₂ = 10h²|_X, c₃ = −40h³|_X. Thus,
(8) χ(X) = −40h³·[X] = −40·([line]·[X]) = −40·5 = −200
We conclude that
(9) χ = h⁰ + h² − dim H³ + h⁴ + h⁶ = 1 + 1 − dim H³(X) + 1 + 1 = −200,
implying that dim H³ = 204. Since h^{3,0} = h^{0,3} = 1, we obtain h^{1,2} = h^{2,1} = 101.
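The coefficient extraction in (7)–(8) can be checked mechanically (a sketch, not part of the lecture): expand c(TX) = (1 + h)⁵/(1 + 5h) modulo h⁴ and pair c₃ with the degree of the quintic.

```python
from math import comb

def mul(p, q, deg=3):
    """Multiply polynomial coefficient lists, truncating above h^deg."""
    out = [0] * (deg + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= deg:
                out[i + j] += a * b
    return out

cTP4 = [comb(5, k) for k in range(4)]  # (1+h)^5 mod h^4: 1, 5, 10, 10
inv = [(-5) ** k for k in range(4)]    # 1/(1+5h) mod h^4: 1, -5, 25, -125
cTX = mul(cTP4, inv)                   # c(TX) = (1+h)^5 / (1+5h)

chi = cTX[3] * 5                       # chi = c3 . [X], and h^3 . [X] = 5
print(cTX, chi)  # -> [1, 0, 10, -40] -200

assert comb(9, 5) - 1 - 24 == 101      # h^{2,1}: quintics mod scaling and PGL(5)
```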
In fact, h^{1,1} = 1, and we have a symplectic parameter given by the area of a generator of H₂(X) (given by the class of a line in H₂(P⁴)). We further have 101 = h^{2,1} complex parameters: the equation of the quintic gives h⁰(O_{P⁴}(5)) = (9 choose 5) = 126 dimensions, from which we lose one by passing to projective space, and 24 by modding out by Aut(CP⁴) = PGL(5, C). That is, all complex deformations are still quintics.
Now we construct the mirror of X. Start with a distinguished family of quintic 3-folds
(10) X_ψ = {(x₀ : ⋯ : x₄) ∈ P⁴ | f_ψ = x₀⁵ + ⋯ + x₄⁵ − 5ψ·x₀x₁x₂x₃x₄ = 0}
Let G = {(a₀, . . . , a₄) ∈ (Z/5Z)⁵ | Σ aᵢ = 0}/(Z/5Z = {(a, a, a, a, a)}). Then G ≅ (Z/5Z)³ acts on X_ψ by (x_j) ↦ (x_j·ξ^{a_j}) where ξ = e^{2πi/5} (f_ψ is G-invariant because Σ a_j = 0 mod 5, and (1, 1, 1, 1, 1) acts trivially because the x_j are homogeneous coordinates). Furthermore, X_ψ is smooth for ψ generic (i.e. ψ⁵ ≠ 1), but X_ψ/G is singular: the action has fixed points (x₀ : ⋯ : x₄) ∈ X_ψ s.t. at least two coordinates are 0. This consists of
• 10 curves C_ij, where e.g. C_01 = {x_0 = x_1 = 0, x_2^5 + x_3^5 + x_4^5 = 0} with stabilizer Z/5 = {(a, −a, 0, 0, 0)}, so C_01/G ≅ P^1 is the line y_2 + y_3 + y_4 = 0 in P^2, y_i = x_i^5, and
• 10 points P_ijk, e.g. P_012 = {x_0 = x_1 = x_2 = 0, x_3^5 + x_4^5 = 0} with stabilizer (Z/5Z)^2, so P_012/G = {pt}.
The singular locus of X_ψ/G is the 10 curves C̄_ij = C_ij/G ≅ P^1, with C̄_ij, C̄_jk, C̄_ik meeting at the point P̄_ijk.
Next, let X_ψ^∨ be the resolution of singularities of X_ψ/G, i.e. X_ψ^∨ smooth and equipped with a map π : X_ψ^∨ → X_ψ/G which is an isomorphism outside π^{−1}(∪ C̄_ij).
The explicit construction is complicated, and one can use toric geometry to do it. One can further show that it is a crepant resolution, i.e. the canonical bundle K_{X_ψ^∨} = π^* K_{X_ψ/G}, so the Calabi-Yau condition is preserved and X_ψ^∨ is a Calabi-Yau 3-fold.
MIRROR SYMMETRY: LECTURE 6
Along C̄_ij (away from the P̄_ijk), X_ψ/G looks like (C^2/(Z/5Z)) × C, (x_1, x_2, x_3) ∼ (ξ^a x_1, ξ^{−a} x_2, x_3). Now C^2/(Z/5Z) ≅ {uv = w^5} ⊂ C^3, [x_1, x_2] ↦ [x_1^5, x_2^5, x_1 x_2], is an A_4 singularity, which can be resolved by blowing up twice, getting four exceptional divisors. Doing this for each C̄_ij gives 40 divisors. Similarly, resolving each P̄_ijk creates six divisors, for a total of 60 divisors. Thus, X_ψ^∨ contains 100 new divisors in addition to the hyperplane section, so indeed h^{1,1}(X_ψ^∨) = 101. Similarly, as we were only able to build a one-parameter family, h^{2,1}(X_ψ^∨) = 1, giving us mirror symmetric Hodge diamonds:
(11)
h^{p,q}(X):
                 1
              0     0
           0     1     0
        1   101   101    1
           0     1     0
              0     0
                 1

h^{p,q}(X_ψ^∨):
                 1
              0     0
           0    101    0
        1     1     1     1
           0    101    0
              0     0
                 1
We want to see how mirror symmetry predicts the Gromov-Witten invariants N_d (the "number of rational curves" n_d) of the quintic. For that, we need to understand the mirror map between the Kähler parameter q = exp(2πi ∫ (B + iω)) on X and the complex parameter ψ on the mirror X_ψ^∨ (which will also give, by differentiating, an isomorphism H^{1,1}(X) ≅ H^{2,1}(X_ψ^∨)) as well as calculations of the Yukawa coupling on H^{2,1}(X_ψ^∨).
1.1. Degenerations and the Mirror Map. Last time, we saw that a basis {e_i} of H^2(X, Z) by elements of the Kähler cone gives coordinates on the complexified Kähler moduli space: if [B + iω] = Σ t_i e_i, the parameter q_i = exp(2πi t_i) ∈ C^* gives the large volume limit as q_i → 0, Im(t_i) → ∞. Physics predicts that the mirror situation is a degeneration to a large complex structure limit and that, near such a limit point, there are "canonical coordinates" on the complex moduli spaces making it possible to describe the mirror map.
• Degeneration: consider a family π : X → D^2 where for t ≠ 0, X_t ≅ X (with varying J) and for t = 0, X_0 is typically singular. For instance, consider the family of elliptic curves C_t = {y^2 z = x^3 + x^2 z − t z^3} ⊂ CP^2 (in affine coordinates, C_t : y^2 = x^3 + x^2 − t). C_t is a smooth torus for t ≠ 0, and nodal at t = 0, obtained by pinching a loop on the torus.
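One can watch this degeneration numerically: the branch points of the double cover below are the roots of x^3 + x^2 − t, and at t = 0 two of them collide at the node. A quick sketch (numpy assumed available):

```python
import numpy as np

# Roots of x^3 + x^2 - t as t -> 0: one root stays near -1 while
# two roots collide at 0, pinching a loop on the torus (the nodal fiber).
for t in (0.1, 0.001, 0.0):
    roots = np.sort(np.roots([1, 1, 0, -t]).real)
    print(t, np.round(roots, 3))
```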
• Monodromy: follow the family (X_t) as t varies along the loop in π_1(D^2 \ {0}, t_0) going around the origin. All the X_t are diffeomorphic, and thus induce a monodromy diffeomorphism φ of X_{t_0}, defined up to isotopy. This in turn induces φ_* ∈ Aut(H^n(X_{t_0}, Z)). In the above example, φ_* acts on H_1(C_{t_0}) = Z^2 by the Dehn twist matrix ( 1 1 ; 0 1 ): observe that C_t → CP^1 = C ∪ {∞} is 2:1 by projection to x, and the branch points are ∞ plus the roots of x^3 + x^2 − t. As t → 0, there is one root near −1 and two near 0, which rotate as t goes around 0. Letting a be the line between the two roots near 0 and b be between the root near −1 and the closest other root, the monodromy maps a, b to a, b + a.
Remark. Note that this complex parameter t is ad hoc. A more natural way to describe the degeneration would be to describe C_t as an abstract elliptic curve C_t = C/(Z + τ(t)Z). Then τ(t), or rather exp(2πiτ), is a better quantity. Equip C_t with a holomorphic volume form Ω_t normalized so ∫_a Ω_t = 1 ∀ t. Then let τ(t) = ∫_b Ω_t: as t goes around the origin, τ(t) → τ(t) + 1 since b ↦ b + a. Moreover, q(t) = exp(2πiτ(t)) is still single-valued, and as t → 0, we still have Im τ(t) → ∞ and q(t) → 0. In the former case, we have ∫_a dx/y ∈ −iR^+ tending to 0 and ∫_b dx/y ∈ R^+ tending to a constant value, so the ratio goes to +i∞. In the latter case, q(t) is a holomorphic function of t, and goes around 0 once when t does, i.e. it has a single root at t = 0. Thus, q is a local coordinate for the family.
Next time, we will see an analogue of this for a family of Calabi-Yau manifolds. | https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/11f2fffa337f9a6ec301c32b1ddb3d98_MIT18_969s09_lec06.pdf |
MIT OpenCourseWare
18.969 Topics in Geometry: Mirror Symmetry
Spring 2009
MIRROR SYMMETRY: LECTURE 3
DENIS AUROUX
Last time, we saw that a deformation of (X, J) is given by

(1) {s ∈ Ω^{0,1}(X, TX) | ∂̄s + (1/2)[s, s] = 0}/Diff(X)
To first order, these are determined by Def^1(X, J) = H^1(X, TX), but extending these to higher order is obstructed by elements of H^2(X, TX). In the Calabi-Yau case, recall that:

Theorem 1 (Bogomolov-Tian-Todorov). For X a compact Calabi-Yau (Ω^{n,0}_X ≅ O_X) with H^0(X, TX) = 0 (automorphisms are discrete), deformations of X are unobstructed.
Note that, if X is a Calabi-Yau manifold, we have a natural isomorphism TX ≅ Ω^{n−1}_X, v ↦ ι_v Ω, so

(2) H^0(X, TX) = H^{n−1,0}(X)

and similarly

(3) H^1(X, TX) = H^{n−1,1}, H^2(X, TX) = H^{n−1,2}
1. Hodge theory

Given a Kähler metric, we have a Hodge ∗ operator and L^2-adjoints

(4) d^* = −∗d∗, ∂̄^* = −∗∂∗

and Laplacians

(5) Δ = dd^* + d^*d, □ = ∂̄∂̄^* + ∂̄^*∂̄

Every (d/∂̄)-cohomology class contains a unique harmonic form, and one can show that □ = (1/2)Δ. We obtain

(6) H^k_{dR}(X, C) ≅ Ker(Δ : Ω^k(X, C) ↺) = ⊕_{p+q=k} Ker(□ : Ω^{p,q} ↺) ≅ ⊕_{p+q=k} H^{p,q}_{∂̄}(X)
The Hodge ∗ operator gives an isomorphism H^{p,q} ≅ H^{n−p,n−q}. Complex conjugation gives H^{p,q} ≅ H^{q,p}, giving us a Hodge diamond

(7)
                     h^{n,n}
              h^{n−1,n}    h^{n,n−1}
           ⋯                       ⋯
     h^{0,n}           ⋯           h^{n,0}
           ⋯                       ⋯
              h^{0,1}    h^{1,0}
                     h^{0,0}

For a Calabi-Yau, we have

(8) H^{p,0} ≅ H^{n,n−p} = H^{n−p}(X, Ω^n_X) ≅ H^{n−p}(X, O_X) = H^{0,n−p} ≅ H^{n−p,0}

Specifically, for a Calabi-Yau 3-fold with h^{1,0} = 0, we have a reduced Hodge diamond

(9)
                 1
              0     0
           0   h^{1,1}   0
        1   h^{2,1}   h^{2,1}   1
           0   h^{1,1}   0
              0     0
                 1

Mirror symmetry says that there is another Calabi-Yau manifold whose Hodge diamond is the mirror image (or 90 degree rotation) of this one.
There is another interpretation of the Kodaira-Spencer map H^1(X, TX) ≅ H^{n−1,1}. For X = (X, J_t)_{t∈S} a family of complex deformations of (X, J), c_1(K_X) = −c_1(TX) = 0 implies that Ω^n_{(X,J_t)} ≅ O_X under the assumption H^1(X) = 0, so we don't have to worry about deforming outside the Calabi-Yau case. Then ∃ [Ω_t] ∈ H^{n,0}(X) ⊂ H^n(X, C). How does this depend on t? Given ∂/∂t ∈ T_0 S, we have ∂Ω_t/∂t ∈ Ω^{n,0} ⊕ Ω^{n−1,1} by Griffiths transversality:

(10) α_t ∈ Ω^{p,q}_{J_t} ⇒ ∂α_t/∂t ∈ Ω^{p,q} + Ω^{p−1,q+1} + Ω^{p+1,q−1}
Since ∂Ω_t/∂t|_{t=0} is d-closed (dΩ_t = 0), (∂Ω_t/∂t|_{t=0})^{(n−1,1)} is ∂̄-closed, while

(11) ∂̄((∂Ω_t/∂t|_{t=0})^{(n,0)}) + ∂((∂Ω_t/∂t|_{t=0})^{(n−1,1)}) = 0

Thus, ∃ [(∂Ω_t/∂t|_{t=0})^{(n−1,1)}] ∈ H^{n−1,1}(X).
For fixed Ω_0, this is independent of the choice of Ω_t. If we rescale to f(t)Ω_t,

(12) ∂/∂t (f(t)Ω_t) = (∂f/∂t) Ω_t + f(t) ∂Ω_t/∂t

Taking t → 0, the former term is (n, 0), while for the latter, f(0) scales linearly with Ω_0.

(13) H^{n−1,1}(X) = H^1(X, Ω^{n−1}_X) ≅ H^1(X, TX)
and the two maps T_0 S → H^{n−1,1}(X) ≅ H^1(X, TX) agree. Hence, for θ ∈ H^1(X, TX) a first-order deformation of complex structure, θ · Ω ∈ H^1(X, Ω^n_X ⊗ TX) = H^{n−1,1}(X) and (the Gauss-Manin connection) [∇_θ Ω]^{(n−1,1)} ∈ H^{n−1,1}(X) are the same. We can iterate this to the third-order derivative: on a Calabi-Yau 3-fold, we have

(14) ⟨θ_1, θ_2, θ_3⟩ = ∫_X Ω ∧ (θ_1 · θ_2 · θ_3 · Ω) = ∫_X Ω ∧ (∇_{θ_1} ∇_{θ_2} ∇_{θ_3} Ω)

where the latter wedge is of a (3, 0) and a (0, 3) form.
2. Pseudoholomorphic curves

(reference: McDuff-Salamon) Let (X^{2n}, ω) be a symplectic manifold, J a compatible almost-complex structure, ω(·, J·) the associated Riemannian metric. Furthermore, let (Σ, j) be a Riemann surface of genus g, z_1, …, z_k ∈ Σ marked points. There is a well-defined moduli space M_{g,k} = {(Σ, j, z_1, …, z_k)} modulo biholomorphisms, of complex dimension 3g − 3 + k (note that M_{0,3} = {pt}).

Definition 1. u : Σ → X is a J-holomorphic map if J ∘ du = du ∘ j, i.e. ∂̄_J u = (1/2)(du + J ∘ du ∘ j) = 0. For β ∈ H_2(X, Z), we obtain an associated moduli space

(15) M_{g,k}(X, J, β) = {(Σ, j, z_1, …, z_k), u : Σ → X | u_*[Σ] = β, ∂̄_J u = 0}/∼

(16) where ∼ identifies (u, Σ, z_1, …, z_k) with (u', Σ', z_1', …, z_k') whenever there is a biholomorphism φ : (Σ, z_1, …, z_k) → (Σ', z_1', …, z_k') with u = u' ∘ φ.

This space is the zero set of the section ∂̄_J of E → Map(Σ, X)_β × M_{g,k}, where E is the (Banach) bundle defined by E_u = W^{r,p}(Σ, Ω^{0,1}_Σ ⊗ u^*TX).
We can define a linearized operator

(17) D∂̄ : W^{r+1,p}(Σ, u^*TX) × T M_{g,k} → W^{r,p}(Σ, Ω^{0,1}_Σ ⊗ u^*TX)

D∂̄(v, j') = (1/2)(∇v + J ∘ ∇v ∘ j + (∇_v J) · du · j + J · du · j')
           = ∂̄v + (1/2)(∇_v J) du · j + (1/2) J · du · j'

This operator is Fredholm, with real index

(18) ind_R D∂̄ := 2d = 2⟨c_1(TX), β⟩ + n(2 − 2g) + (6g − 6 + 2k)
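Equation (18) is a simple linear formula, easy to tabulate; a sketch (the helper name is ours). For a Calabi-Yau 3-fold (⟨c_1, β⟩ = 0, n = 3) and unmarked rational curves (g = 0, k = 0), the expected dimension is 0, which is what makes counting rational curves sensible:

```python
# Real index of the linearized operator (eq. 18):
# ind_R = 2<c1(TX), beta> + n(2 - 2g) + (6g - 6 + 2k)
def fredholm_index(c1_beta, n, g, k):
    return 2 * c1_beta + n * (2 - 2 * g) + (6 * g - 6 + 2 * k)

# Calabi-Yau 3-fold, unmarked rational curves: expected dimension 0
print(fredholm_index(0, 3, 0, 0))  # 0

# each marked point adds 2 (its position on the domain)
print(fredholm_index(0, 3, 0, 1))  # 2
```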
One can ask about transversality, i.e. whether we can ensure that D∂̄ is onto at every solution. We say that u is regular if this is true at u: if so, M_{g,k}(X, J, β) is smooth of dimension 2d.

Definition 2. We say that a map u : Σ → X is simple (or "somewhere injective") if ∃ z ∈ Σ s.t. du(z) ≠ 0 and u^{−1}(u(z)) = {z}.

Note that otherwise u will factor through a covering Σ → Σ'. We set M*_{g,k}(X, J, β) to be the moduli space of such simple curves.
Theorem 2. Let J(X, ω) be the set of compatible almost-complex structures on X: then

(19) J^{reg}(X, β) = {J ∈ J(X, ω) | every simple J-holomorphic curve in class β is regular}

is a Baire subset in J(X, ω), and for J ∈ J^{reg}(X, β), M*_{g,k}(X, J, β) is smooth (as an orbifold, if M_{g,k} is an orbifold) of real dimension 2d and carries a natural orientation.
6.001 Structure and Interpretation of Computer Programs. Copyright © 2004 by Massachusetts Institute of Technology.
6.001 Notes: Section 5.1
Slide 5.1.1
In this lecture we are going to continue with the theme of
building abstractions. Thus far, we have focused entirely on
procedural abstractions: the idea of capturing a common pattern
of computation within a procedure, and isolating the details of
that computation from the use of the concept within some other
computation. Today we are going to turn to a complementary
issue, namely how to group together pieces of information, or
data, into abstract structures. We will see that the same general
theme holds: we can isolate the details of how the data are glued
together from the use of the aggregate data structure as a
primitive element in some computation. We will also see that
the procedures we use to manipulate the elements of a data structure often have an inherent structure that mimics
the data structure, and we will use this idea to help us design our data abstractions and their associated procedures.
Slide 5.1.2
Let's review what we have been looking at so far in the course,
in particular, the idea of procedural abstraction to capture
computations. Our idea is to take a common pattern of
computation, then capture that pattern by formalizing it with a
set of parameters that specify the parts of the pattern that
change, while preserving the pattern inside the body of a
procedure. This encapsulates the computation associated with
the pattern inside a lambda object. Once we have abstracted that
computation inside the lambda, we can then give it a name,
using our define expression, then treat the whole thing as a
primitive by just referring to the name, and use it without
worrying about the details within the lambda.
Slide 5.1.3
This means we can treat the procedure as if it is a kind of black box.
Slide 5.1.4
We need to provide it with inputs of a specified type.
Slide 5.1.5
We know by the contract associated with the procedure, that if
we provide inputs of the appropriate type, the procedure will
produce an output of a specified type...
Slide 5.1.6
... and by giving the whole procedure a name, we create this
black box abstraction, in which we can use the procedure
without knowing details. This means that the procedure will
obey the contract that specifies the mapping from inputs to
outputs, but the user is not aware of the details by which that
contract is enforced.
Slide 5.1.7
So let's use this idea to look at a more interesting algorithm than
the earlier ones we've examined. Here, again, is Heron of
Alexandria's algorithm for computing good approximations to
the square root of a positive number. Read the steps carefully, as
we are about to implement them.
Slide 5.1.8
Now, let's use the tools we've seen so far to implement this method.
Notice how the first procedure uses the ideas of wishful
thinking, and recursive procedures to capture the basic idea of
Heron's method. Try is a procedure that takes a current guess
and the x, and captures the top-level idea of the method. It
checks to see if the guess is sufficient. If it is, it simply returns
the value of that guess. If it is not, then it tries again, with a new
guess.
Note how we are using wishful thinking to reduce the problem
to another version of the same problem, and to abstract out the
idea of both getting a new guess and checking for how good the
guess is. These are procedures we can subsequently write, for
example, as shown. Finally, notice how the recursive call to try will use a different argument for guess, since we
will evaluate the expression before substituting into the body.
Also notice the recursive structure of try and the use of the special form if to control the evolution of this
procedure.
The method for improve simply incorporates the ideas from the algorithm, again with a procedure abstraction
to separate out the idea of averaging from the procedure for improving the guess.
Finally, notice how we can build a square root procedure on top of the procedure for try.
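The slide's code itself is not reproduced in the text, but the description pins it down; here is a sketch consistent with it (the tolerance in good-enuf? is our assumption, not the slide's):

```scheme
; Heron's method as described: try checks the guess, and either
; returns it or tries again with an improved guess.
(define (average a b) (/ (+ a b) 2))

(define (good-enuf? guess x)
  (< (abs (- (* guess guess) x)) 0.0001))

(define (improve guess x)
  (average guess (/ x guess)))

(define (try guess x)
  (if (good-enuf? guess x)
      guess
      (try (improve guess x) x)))

(define (sqrt x) (try 1.0 x))
```

Evaluating (sqrt 2) then yields approximately 1.41421.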
Slide 5.1.9
If we think of each of these procedures as its own black box
abstraction, then we can visualize the universe containing these
procedures as shown. Each procedure exists with its own
contract, but each is accessible to the user, simply by referring to
it by name.
While this sounds fine in principle, there is a problem with this
viewpoint. Some of these procedures are general methods, such
as average and sqrt, and should be accessible to the
user, who might utilize them elsewhere. Some of them,
however, such as try or good-enuf?, are really specific
to the computation for square roots. Ideally we would like to
capture those procedures in a way that they can only be used by sqrt but not by other methods.
Slide 5.1.10
Abstractly, this is what we would like to do. We would like to
move the abstractions for the special purpose procedures inside
of the abstraction for sqrt so that only it can use them, while
leaving more generally useful procedures available to the user.
In this way, these internal procedures should become part of the
implementation details for sqrt but be invisible to outside
users.
Slide 5.1.11
And here is how to do this.
Note that the definition of sqrt binds this name to a lambda.
Within the bounds of that lambda we have moved the
definitions for improve, good-enuf?, and sqrt-iter (which is what we have renamed try). By moving
these procedures inside the body of the lambda, they become
internal procedures, accessible only to other expressions within
the body of that lambda. That is, if we try to refer to one of these
names when interacting with the evaluator, we will get an
unbound variable error. But these names can be referenced by
expressions that exist within the scope of this lambda.
The rules of evaluation say that when we apply sqrt to some argument, the body of this lambda will be
evaluated. At that point, the internal definitions are evaluated.
The final expression of the lambda is the expression (sqrt-iter 1.0) which means when sqrt is
applied to some argument, by the substitution model it will reduce to evaluating this expression, meaning it will
begin the recursive evaluation of guesses for the square root.
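Concretely, the block-structured sqrt the slide describes looks something like this sketch (internal names per the text; the tolerance is our assumption):

```scheme
(define sqrt
  (lambda (x)
    ; internal definitions: visible only within the body of this lambda
    (define (average a b) (/ (+ a b) 2))
    (define (good-enuf? guess)
      (< (abs (- (* guess guess) x)) 0.0001))
    (define (improve guess)
      (average guess (/ x guess)))
    (define (sqrt-iter guess)
      (if (good-enuf? guess)
          guess
          (sqrt-iter (improve guess))))
    ; the final expression: start the iteration
    (sqrt-iter 1.0)))
```

With this version, (sqrt 9.0) still works, but typing (sqrt-iter 1.0) at the top level signals an unbound-variable error, as the text describes.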
Slide 5.1.12
In fact we can stress this by drawing a box around the boundary
of the outermost lambda. Clearly that boundary exactly scopes
the black box abstraction that I wanted.
This is called block structure, which you can find, discussed in
more detail in the textbook.
Slide 5.1.13
Schematically, this means that sqrt contains within it only
those internal procedures that belong to it, and behaves
according to the contract expected by the user, without the user
knowing how those procedures accomplish this contract.
This provides another method for abstracting ideas and isolating
them from other abstractions.
Slide 5.1.14
So here is the summary of what we have seen in this section.
6.001 Notes: Section 5.2
Slide 5.2.1
So let's take that idea of abstraction and build on it. To set the
stage for what we are about to do, it is useful to think about how
the language elements can be grouped together into a hierarchy.
At the atomic level, we have a set of primitives. In Scheme,
these include primitive data objects: numbers, strings and
Booleans. And these include built-in, or primitive, procedures:
for numbers, things like *, +, =, >; for strings, things like
string=?, substring; for Booleans, things like and, or, not.
To put these primitive elements together into more interesting
expressions, we have a means of combination, that is, a way of
combining simpler pieces into expressions that can themselves
be treated as elements of other expressions. The most common
one, and the one we have seen in the previous lectures, is procedure application. This is the idea of creating a
combination of subexpressions, nested within a pair of parentheses: the value of the first subexpression is a
procedure, and the expression captures the idea of applying that procedure to the values of the other expressions,
which means as we have seen that we substitute the values of the arguments for the corresponding parameters in the
body of the procedure, and proceed with the evaluation. We know that these combinations can themselves be
included within other combinations, and the same rules of evaluation will recursively govern the computation.
Finally, our language has a means of abstraction: a way of capturing computational elements and treating them as
if they were primitives; or said another way, a method of isolating the details of a computation from the use of a
computation. Our first means of abstraction was define, the ability to give a name to an element, so that we could
just use the name, thereby suppressing the details from the use of the object. This ability to give a name to
something is most valuable when used with our second means of abstraction, capturing a computation within a
procedure. This means of abstraction dealt with the idea that a common pattern of computation can be generalized
into a single procedure, which covered every possible application of that idea to an appropriate value. When
coupled with the ability to give a name to that procedure, we engendered the ability to create an important cycle in
our language: we can now create procedures, name them, and thus treat them as if they were themselves primitive
elements of the language. The whole goal of a high-level language is to allow us to suppress unnecessary detail in
this manner, while focusing on the use of a procedural abstraction to support some more complex computational
design.
Today, we are going to generalize the idea of abstractions to include those that focus on data, rather than
procedures. So we are going to talk about how to create compound data objects, and we are going to examine standard
procedures associated with the manipulation of those data structures. We will see that data abstractions mirror many
of the properties of procedural abstractions, and we will thus generalize the ideas of compound data into data
abstractions, to complement our procedural abstractions.
Slide 5.2.2
So far almost everything we've seen in Scheme has revolved
around numbers and computations associated with numbers.
This has been partly deliberate on our part, because we wanted
to focus on the ideas of procedural abstraction, without getting
bogged down in other details. There are, however, clearly
problems in which it is easier to think in terms of other elements
than just numbers, and in which those elements have pieces that
need to be glued together and pulled apart, while preserving the
concept of the larger unit.
Slide 5.2.3
So our goal is to create a method for taking primitive data
elements, gluing them together, and then treating the result as if
it were itself a primitive element. Of course, we will need a way
of "de-gluing" the units, to get back the constituent parts.
What do we mean when we say we want to treat the result of
gluing elements together as a primitive data element? Basically
we want the same properties we had with numbers: we can
apply procedures to them, we can use procedures to generate
new versions of them, and we can create expressions that
include them as simpler elements.
Slide 5.2.4
The most important point when we "glue" things together is to
have a contract associated with that process. This means that we
don't really care that much about the details of how we glue
things together, so long as we have a means of getting back out
the pieces when needed. This means that the "glue" and the
"unglue" work hand in hand, guaranteeing that however, the
compound unit is created, we can always get back the parts we
started with.
Slide 5.2.5
And ideally we would like the process of gluing things together
to have the property of closure, that is, that whatever we get by
gluing things into a compound structure can be treated as a
primitive so that it can be the input to another gluing operation.
Not all ways of creating compound data have this property, but
the best of them do, and we say they are closed under the
operation of creating a compound object if the result can itself
be a primitive for the same compound data construction process.
Slide 5.2.6
Slide 5.2.6
Scheme's basic means for gluing things together is called
cons, short for constructor, and virtually all other methods
for creating compound data objects are based on cons.
Cons is a procedure that takes two expressions as input. It
evaluates each in turn, and then glues these values together into
something called a pair. Note that the actual pair object is the
value returned by evaluating the cons. The two parts of a cons
pair are called the car and the cdr, and if we apply the
procedures of those names to a pair, we get back the value of the
argument that was evaluated when the pair was created.
Note that there is a contract here between cons, car and cdr, in
which cons glues things together in some arbitrary manner, and all that matters is that when car, for example, is
applied to that object, it gets back out what we started with.
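In code, the contract reads:

```scheme
; cons evaluates both arguments and glues their values into a pair;
; car and cdr honor the contract by returning exactly those values.
(define p (cons (+ 1 2) 4))
(car p)    ; => 3, the value of the first argument
(cdr p)    ; => 4, the value of the second
```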
Slide 5.2.7
Note that we can treat a pair as a unit, that is, having built a pair,
we can treat it as a primitive and use it anywhere we might use
any other primitive. So we can pass a pair in as input to some
other data abstraction, such as another pair.
Slide 5.2.8
In this way, we can create elements that are naturally thought of
as simple units that happen to themselves be created out of
elements that are also thought of as simple units. This allows us
to build up levels of hierarchy of data abstractions.
For example, suppose we want to build a system to reason about
figures drawn in the plane. Those figures might be built out of
line segments that have start and end points, and those points are
built out of x and y coordinates.
Notice how there is a contract between make-point and point-x
or point-y, in which the selectors get out the pieces that are
glued together by the constructor. Because they are built on top
of cons, car and cdr, they inherit the same contract that holds
there. And in the case of segments, these pieces are glued together as if they are primitives, so that we have cons
pairs whose elements are also cons pairs.
Thus we see how cons pairs have the property of closure, in which the result of consing can be treated as primitive
input to another level of abstraction.
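A sketch of the point and segment abstractions the text describes (make-point, point-x and point-y are named in the text; the segment names are our assumption):

```scheme
; points: an x and a y coordinate glued into a pair
(define (make-point x y) (cons x y))
(define (point-x pt) (car pt))
(define (point-y pt) (cdr pt))

; segments: two points glued into a pair -- a cons pair whose parts
; are themselves cons pairs, illustrating closure
(define (make-segment start end) (cons start end))
(define (start-point seg) (car seg))
(define (end-point seg) (cdr seg))
```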
Slide 5.2.9
We can formalize what we have just seen, in terms of the
abstraction of a pair. This abstraction has several standard parts.
First, it has a constructor, for making instances of this
abstraction. The constructor has a kind of contract, in which
objects A and B are glued together to construct a new object,
called a Pair, with two pieces inside.
Second, it has some selectors or accessors to get the pieces back
out. Notice how the contract specifies the interaction between
the constructor and the selectors, whatever is put together can be
pulled back apart using the appropriate selector.
Typically, a data abstraction will also have a predicate, here
called pair?. Its role is to take in any object, and return
true if the object is of type pair. This allows us to test objects for their type, so that we know whether to apply
particular selectors to that object.
The key issue here is the contract between the constructor and the selectors. The details of how a constructor puts
things together are not at issue, so long as however the pieces are glued together, they can be separated back out
into the original parts by the selectors.
Slide 5.2.10
So here we just restate that idea, one more time, stressing the
idea of the contract that defines the interaction between
constructor and selectors. And, we stress one more time the idea
that pairs are closed, that is, they can be input to the operation of
making other pairs.
6.001 Notes: Section 5.3
Slide 5.3.1
So how do we use the idea of pairs to help us in creating
computational entities? To illustrate this, let’s stick with our
example of points and segments. Suppose we construct a couple
of points, using the appropriate constructor, and we then glue
these points together into a segment.
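As a concrete sketch, the point and segment abstractions can be built directly on pairs. The particular names here (make-point, x-point, y-point, make-seg, start-point, end-point) are assumptions standing in for the definitions on the slide:

```scheme
; Point abstraction: a point is a pair of coordinates (names assumed).
(define (make-point x y) (cons x y))   ; constructor
(define (x-point p) (car p))           ; selectors
(define (y-point p) (cdr p))

; Segment abstraction: a segment is a pair of points.
(define (make-seg a b) (cons a b))
(define (start-point s) (car s))
(define (end-point s) (cdr s))

; Construct two points, then glue them together into a segment.
(define p1 (make-point 1 2))
(define p2 (make-point 4 6))
(define s1 (make-seg p1 p2))
(x-point (start-point s1))  ; => 1
```

Note that the segment constructor treats the two points as primitives: this is closure at work, since each point is itself a cons pair.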
Slide 5.3.2
Now suppose we want to think about the operation of stretching
a point, that is, pulling (or pushing) it along a line from the
origin through the point. Ideally, we would just think about this
in terms of operations on elements of a point, without worrying
about how the point is actually implemented. We do this with
the code shown.
Note how this code creates a new data object. If we stretch
point P1, we get a new point. Also note, as an aside, how a cons
pair prints out, with open and close parentheses, and with the
values of the two parts within those parentheses, separated by a
dot. Thus, the point created by applying our stretch procedure
has a different value for the x and y parts than the original point,
which is still hanging around. Thus, as we might expect from the actual code, we get out the values of the parts of
P1, but then make a new data object with scaled versions of those values as the parts.
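The stretch operation described above can be sketched as follows, using the point abstraction; the names make-point, x-point, y-point, and stretch-point are assumptions:

```scheme
; Point abstraction (names assumed).
(define (make-point x y) (cons x y))
(define (x-point p) (car p))
(define (y-point p) (cdr p))

; Stretch a point along the line through the origin, by scaling
; both coordinates; this creates a brand-new point.
(define (stretch-point p scale)
  (make-point (* scale (x-point p))
              (* scale (y-point p))))

(define p1 (make-point 1 2))
(stretch-point p1 2)  ; => a new point, which prints as (2 . 4)
p1                    ; => (1 . 2): the original point is unchanged
```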
Slide 5.3.3
And we can generalize this idea to handle operations on
segments, as well as points. Note how each of these procedures
builds on constructors and selectors for the appropriate data
structure, so that in examining the code, we have no sense of the
underlying implementation. These structures happen to be built
out of cons pairs, but from the perspective of the code designer,
we rely only on the contract for constructors and selectors for
points and segments.
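For instance, stretching a whole segment reduces to stretching its endpoints. Note that the segment-level code below (a sketch, with assumed names) never mentions cons, car, or cdr, relying only on the constructors and selectors:

```scheme
; Underlying abstractions (names assumed).
(define (make-point x y) (cons x y))
(define (x-point p) (car p))
(define (y-point p) (cdr p))
(define (make-seg a b) (cons a b))
(define (start-point s) (car s))
(define (end-point s) (cdr s))
(define (stretch-point p scale)
  (make-point (* scale (x-point p)) (* scale (y-point p))))

; Stretch a segment by stretching each endpoint; this layer sees
; only points and segments, not the pairs they are built from.
(define (stretch-seg s scale)
  (make-seg (stretch-point (start-point s) scale)
            (stretch-point (end-point s) scale)))

(stretch-seg (make-seg (make-point 1 2) (make-point 3 4)) 2)
```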
Slide 5.3.4
Now, suppose we decide that we want to take a group of points
(which might have segments defined between adjacent points)
and manipulate these groups of points. For example, a figure
might be defined as a group of ordered points, with segments
between each consecutive pair of points. And we might want to
stretch that whole group, or rotate it, or do something else to it.
How do we group these things together?
Well, one possibility is just to use a bunch of cons pairs, such as
shown here. But while this is a perfectly reasonable way to glue
things together, it is going to be a bear to manipulate. Suppose
we want to stretch all these points? We would have to write
code that would put together the right combinations of cars and
cdrs to get out the pieces, perform a computation on them, and then glue them back together again. This will be a
royal pain! It would be better if we had a more convenient and conventional way of gluing together groups of
things, and fortunately we do.
Slide 5.3.5
Pairs are a nice way of gluing two things together. However,
sometimes I may want the ability to glue together arbitrary
numbers of things, and here pairs are less helpful. Fortunately,
Scheme also has a primitive way of gluing together arbitrary sets
of objects, called a list, which is a data object with an arbitrary
number of ordered elements within it.
Slide 5.3.6
Of course, we could make a list by just consing together a set of
things, using however many pairs we need. But it is much more
convenient to think of a list as a basic structure, and here is more
formally how we define such a structure. A list is a sequence of
pairs, with the following properties. The car part of a pair in the
list holds the next element of the list. The cdr part of a pair in the
list holds a pointer to the rest of the list. We will also need to tell
when we are at the end of the list, and we have a special symbol,
nil, that signals the fact that there are no more pairs in the list.
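Concretely, a list is just such a chain of pairs whose final cdr is the empty list:

```scheme
; Each car holds an element; each cdr points to the rest of the list;
; the final cdr is the empty list '().
(cons 1 (cons 2 (cons 3 '())))  ; => (1 2 3)
(list 1 2 3)                    ; the list primitive builds the same structure
```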
Slide 5.3.7
Another way of saying this is that lists are sequences of pairs
ending in the empty list. Under that view, we see that lists are
closed under the operations of cons and cdr. To see this, note
that if we are given a sequence of pairs ending in the empty list,
and we cons anything onto that sequence, we get another
sequence of pairs ending in the empty list, hence a list.
Similarly, if we take the cdr of a sequence of pairs ending in the | https://ocw.mit.edu/courses/6-001-structure-and-interpretation-of-computer-programs-spring-2005/1239c38467fc8018b19c808d121bcf5d_lecture5webhand.pdf |
empty list, we get a smaller sequence of pairs ending in the
empty list, hence a list. This property of closure says we can use
lists as primitives within other lists.
The only trick is what happens when I try to take the cdr of an
empty list. The result depends on the Scheme implementation,
as in some cases it is an error, while for other Schemes it is the empty list. The latter view is nice when considering
the closure property as it preserves the notion that the cdr of a list is a list.
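The closure of lists under cons and cdr can be checked directly:

```scheme
(define l (list 1 2 3))
(cons 0 l)           ; => (0 1 2 3): consing onto a list yields a list
(cdr l)              ; => (2 3): the cdr of a nonempty list is a list
(cdr (cdr (cdr l)))  ; => (): eventually we reach the empty list
```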
Slide 5.3.8
To visualize this new conventional way of collecting elements,
called a list, we use box-and-pointer notation. First, a cons pair
is represented by a pair of boxes. The first box contains a pointer
to the value of the first argument to cons and the second box
contains a pointer to the value of the second argument to cons.
The pair also has a pointer into it, and that pointer is the value
returned by evaluating the cons expression, and represents the
actual pair.
Slide 5.3.9
A list then simply consists of a sequence of pairs, or boxes,
which we conventionally draw in a horizontal line. The car
element of each box points to an element of the sequence, and
the cdr element of each box points to the rest of the list. The
empty list is indicated by a diagonal line in the last cdr box.
One can see that the list is much like a skeleton. The cdrs define
the spine of the skeleton, and hanging off the cars are the ribs,
which contain the elements. Also notice how this visualization
clearly defines the closure property of lists, since taking the cdr
of a list gives us a new sequence of boxes ending the empty list.
Slide 5.3.10
To check if something is a list, we need two things. First, we
have a predicate, null? that checks to see if an object is the
empty list. Then, to check if a structure is a list, we can just use
pair? to see if the structure is a sequence of pairs. Actually,
we really should check to see that the sequence ends in the
empty list, but conventionally we just check to see that the first
element is a pair, that is, something made by cons.
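In code, these two checks look like this:

```scheme
(null? '())         ; => #t: only the empty list is null
(null? (list 1 2))  ; => #f
(pair? (list 1 2))  ; => #t: a nonempty list is a sequence of pairs
(pair? '())         ; => #f: the empty list is not itself a pair
```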
Slide 5.3.11
Since we have built lists out of cons pairs, we can use car and
cdr to get out the pieces. But just for today, we are going to
separate the operations on lists from operations on pairs, by
defining a special constructor and selectors for lists, as shown.
Thus we have a way of getting the first and rest of the elements
of a list, and for putting a new element onto the front of a list.
Notice how these operations inherit the necessary closure
properties from their implementation in terms of cons, car and
cdr.
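These list operations are thin layers over the pair operations; the definitions below are a sketch of what the slide shows:

```scheme
(define (first lst) (car lst))            ; selector: the first element
(define (rest lst) (cdr lst))             ; selector: the remaining elements
(define (adjoin elt lst) (cons elt lst))  ; constructor: new element on front

(first (adjoin 0 (list 1 2)))  ; => 0
(rest (adjoin 0 (list 1 2)))   ; => (1 2)
```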
Slide 5.3.12
A key point behind defining a new data abstraction is that it
should make certain kinds of operations easy to perform. We
expect to see that with lists, and in what follows we are going to
explore that idea, looking for standard kinds of operations
associated with lists.
One common operation is the creation of lists. Here is a simple
example of this, which generates a sequence (or list) of numbers
between two points. Notice the nice recursive call within this
procedure. It says, to generate such a list, cons or glue the value
of from onto whatever I get by creating the interval from from
+ 1 to to. This is reducing things to a simpler version of the
same problem, terminating when I just have an empty list.
Notice how the constructor relies on the data abstraction contract. If this procedure works correctly on smaller
problems, then the adjoin operation is guaranteed to return a new list, since it is gluing an element onto a list, and
by closure this is also a list. This kind of inductive reasoning allows us to deduce that the procedure correctly
creates the right kind of data structure.
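Here is a sketch of the list-creating procedure described above; the name enumerate-interval, and the use of adjoin as a list constructor, are assumptions about what the slide shows:

```scheme
(define (adjoin elt lst) (cons elt lst))  ; list constructor (assumed name)

; Build the list of integers from from up to to: glue from onto the
; interval starting at from + 1, terminating with the empty list.
(define (enumerate-interval from to)
  (if (> from to)
      '()
      (adjoin from (enumerate-interval (+ from 1) to))))

(enumerate-interval 2 5)  ; => (2 3 4 5)
```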
Slide 5.3.13
Here is a trace of the substitution model running on an example
of this. Notice how the if clause unwinds this into an adjoining
of an element onto a recursive call to the same procedure with
different arguments. Since we must get the values of the
subexpressions before we can apply the adjoin operation, we
expand the recursive call out again, creating another deferred
adjoin operation. And we keep doing this until we get down to
the base case, returning the value of the empty list. Notice that
this now leaves us with an expression that will create a list, a
sequence terminating in the special symbol for an empty list.
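The trace can be written out as follows, a sketch under the same assumed names (adjoin, enumerate-interval):

```scheme
(define (adjoin elt lst) (cons elt lst))
(define (enumerate-interval from to)
  (if (> from to)
      '()
      (adjoin from (enumerate-interval (+ from 1) to))))

; Substitution-model trace:
; (enumerate-interval 2 4)
; => (adjoin 2 (enumerate-interval 3 4))                ; deferred adjoin
; => (adjoin 2 (adjoin 3 (enumerate-interval 4 4)))
; => (adjoin 2 (adjoin 3 (adjoin 4 (enumerate-interval 5 4))))
; => (adjoin 2 (adjoin 3 (adjoin 4 '())))               ; base case reached
; => (2 3 4)
(enumerate-interval 2 4)  ; => (2 3 4)
```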
Slide 5.3.14
Now we are ready to evaluate the innermost expression, which
actually creates a cons pair. To demonstrate this, we have drawn
in the pair to show that this is the value returned by adjoin.
Notice how it has the correct form for a list.
Slide 5.3.15
The next evaluation adjoins another element onto this list, with
the cdr pointer of the newly created cons pair pointing to the
value of the second argument, namely the previously created list.
Slide 5.3.16
And this of course then leads to this structure. This prints out as
shown, which is the printed form for a list.
Notice the order in which the pairs are created, and notice the set
of deferred operations associated with the creation of this list.
Slide 5.3.17
In addition to procedures that can "cons up" a list, we have
procedures that can walk down a list, also known as "cdring
down" a list.
Here is a simple example, which finds the nth element of a list,
where by convention a list starts at 0. Notice how we use the
recursive property of a list to do this. If our index is 0, then we
want the first element. Otherwise, we know that the nth element
of a list will be the n-1st element of the rest of the list, by the
closure property of lists. So we can recursively reduce this to a
simpler version of the same problem.
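A sketch of the nth-element procedure described above; the name list-nth is an assumption (Scheme's built-in list-ref behaves the same way):

```scheme
; Return the nth element of a list, counting from 0.
(define (list-nth lst n)
  (if (= n 0)
      (car lst)                        ; index 0: the first element
      (list-nth (cdr lst) (- n 1))))  ; else: the (n-1)st of the rest

(list-nth (list 'a 'b 'c) 0)  ; => a
(list-nth (list 'a 'b 'c) 2)  ; => c
```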
For the example shown, we will first check to see if n=0. Since
it does not, we take the list pointed to by joe | https://ocw.mit.edu/courses/6-001-structure-and-interpretation-of-computer-programs-spring-2005/1239c38467fc8018b19c808d121bcf5d_lecture5webhand.pdf |