on this machine? We can try to
understand this in terms of the input/output equations. From the definition of the increment
machine, we have
o[t] = i[t] + 1 .
And if we connect the input to the output, then we will have
Chapter 4 State Machines
6.01— Spring 2011— April 25, 2011
i[t] = o[t] .
And so, we have a problem; these equations cannot be satisfied.
A crucial requirement for applying feedback to a machine is that the machine must not have a direct dependence of its output on its input.
Figure 4.7 Counter made with feedback and serial combination of an incrementer and a delay.
We have already explored a Delay machine, whose output is its input delayed by one step. We can delay
the result of our incrementer, by cascading it with a Delay machine, as shown in figure 4.7. Now,
we have the following equations describing the system:
oi[t] = ii[t] + 1
od[t] = id[t − 1]
ii[t] = od[t]
id[t] = oi[t]
The first two equations describe the operations of the increment and delay boxes; the second two
describe the wiring between the modules. Now we can see that, in general,
oi[t] = ii[t] + 1
oi[t] = od[t] + 1
oi[t] = id[t − 1] + 1
oi[t] = oi[t − 1] + 1
that is, that the output of the incrementer is going to be one greater on each time step.
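This unrolling can be checked by simulating the two update equations directly; the following plain-Python sketch (outside the state-machine classes, names chosen for illustration) steps the incrementer and delay by hand:

```python
def counterOutputs(init, steps):
    # simulate the feedback loop of figure 4.7 by hand: the delay's
    # stored value feeds the incrementer, and the incrementer's
    # output is stored back into the delay
    od_state = init                 # the delay machine's stored value
    outputs = []
    for _ in range(steps):
        od = od_state               # delay output: value stored last step
        oi = od + 1                 # incrementer output
        od_state = oi               # delay stores the incrementer output
        outputs.append(oi)
    return outputs

print(counterOutputs(0, 5))         # [1, 2, 3, 4, 5]
```

Each successive output is one greater than the previous one, as the equations predict.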
Exercise 4.4.
How could you use feedback and a negation primitive machine (which
is a pure function that takes a Boolean as input and returns the negation
of that Boolean) to make a machine whose output alternates between true
and false?
4.2.3.1 Python Implementation
Following is a Python implementation of the feedback combinator, as a new subclass of SM that
takes, at initialization time, a state machine.
class Feedback (SM):
def __init__(self, sm):
self.m = sm
self.startState = self.m.startState
The starting state of the feedback machine is just the state of the constituent machine.
Generating an output for the feedback machine is interesting: by our hypothesis that the output
of the constituent machine cannot depend directly on the current input, it means that, for the
purposes of generating the output, we can actually feed an explicitly undefined value into the
machine as input. Why would we do this? The answer is that we do not know what the input
value should be (in fact, it is defined to be the output that we are trying to compute).
We must, at this point, add an extra condition on our getNextValues methods. They have to
be prepared to accept ’undefined’ as an input. If they get an undefined input, they should
return ’undefined’ as an output. For convenience, in our files, we have defined the procedures
safeAdd and safeMul to do addition and multiplication, but passing through ’undefined’ if it
occurs in either argument.
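The course files define these procedures; a minimal sketch consistent with the description above (the exact course code may differ) is:

```python
def safeAdd(a, b):
    # pass 'undefined' through if it occurs in either argument
    if a == 'undefined' or b == 'undefined':
        return 'undefined'
    return a + b

def safeMul(a, b):
    # same pass-through convention, for multiplication
    if a == 'undefined' or b == 'undefined':
        return 'undefined'
    return a * b

print(safeAdd(3, 4))               # 7
print(safeAdd('undefined', 4))     # undefined
```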
So: if we pass ’undefined’ into the constituent machine’s getNextValues method, we must
not get ’undefined’ back as output; if we do, it means that there is an immediate dependence
of the output on the input. Now we know the output o of the machine.
To get the next state of the machine, we get the next state of the constituent machine, by taking
the feedback value, o, that we just computed and using it as input for getNextValues. This
will generate the next state of the feedback machine. (Note that throughout this process inp is
ignored—a feedback machine has no input.)
def getNextValues(self, state, inp):
(ignore, o) = self.m.getNextValues(state, ’undefined’)
(newS, ignore) = self.m.getNextValues(state, o)
return (newS, o)
Now, we can construct the counter we designed. The Increment machine, as we saw in its
definition, uses a safeAdd procedure, which has the following property: if either argument is
’undefined’, then the answer is ’undefined’; otherwise, it is the sum of the inputs.
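The Increment machine's definition appears earlier in the chapter; a standalone sketch consistent with that description (simplified, not subclassing SM, with the safeAdd pass-through inlined) is:

```python
class Increment:
    # sketch of the Increment machine: a pure function that adds a
    # fixed step to its input; the real version subclasses SM and
    # uses safeAdd to pass 'undefined' through
    def __init__(self, incr):
        self.incr = incr
    def getNextState(self, state, inp):
        if inp == 'undefined':      # stand-in for safeAdd's behavior
            return 'undefined'
        return inp + self.incr

print(Increment(2).getNextState(None, 3))   # 5
```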
def makeCounter(init, step):
return sm.Feedback(sm.Cascade(Increment(step), sm.Delay(init)))
>>> c = makeCounter(3, 2)
>>> c.run(verbose = True)
Start state: (None, 3)
Step: 0
Feedback_96
Cascade_97
Increment_98 In: 3 Out: 5 Next State: 5
Delay_99 In: 5 Out: 3 Next State: 5
Step: 1
Feedback_96
Cascade_97
Increment_98 In: 5 Out: 7 Next State: 7
Delay_99 In: 7 Out: 5 Next State: 7
Step: 2
Feedback_96
Cascade_97
Increment_98 In: 7 Out: 9 Next State: 9
Delay_99 In: 9 Out: 7 Next State: 9
Step: 3
Feedback_96
Cascade_97
Increment_98 In: 9 Out: 11 Next State: 11
Delay_99 In: 11 Out: 9 Next State: 11
Step: 4
Feedback_96
Cascade_97
Increment_98 In: 11 Out: 13 Next State: 13
Delay_99 In: 13 Out: 11 Next State: 13
...
[3, 5, 7, 9, 11, 13, 15, 17, 19, 21]
(The numbers, like 96 in Feedback_96 are not important; they are just tags generated internally
to indicate different instances of a class.)
Exercise 4.5.
Draw state tables illustrating whether the following machines are different, and if so, how:
m1 = sm.Feedback(sm.Cascade(sm.Delay(1),Increment(1)))
m2 = sm.Feedback(sm.Cascade(Increment(1), sm.Delay(1)))
4.2.3.2 Fibonacci
Now, we can get very fancy. We can generate the Fibonacci sequence (1, 1, 2, 3, 5, 8, 13, 21, etc),
in which the first two outputs are 1, and each subsequent output is the sum of the two previous
outputs, using a combination of very simple machines. Basically, we have to arrange for the
output of the machine to be fed back into a parallel combination of elements, one of which delays
the value by one step, and one of which delays by two steps. Then, those values are added, to
compute the next output. Figure 4.8 shows a diagram of one way to construct this system.
The corresponding Python code is shown below. First, we have to define a new component machine. An Adder takes pairs of numbers (appearing simultaneously) as input, and immediately
generates their sum as output.
Figure 4.8 Machine to generate the Fibonacci sequence.
class Adder(SM):
def getNextState(self, state, inp):
(i1, i2) = splitValue(inp)
return safeAdd(i1, i2)
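The splitValue procedure is used here but not defined in this excerpt; judging from its use, it splits a pair input into its two components, treating an undefined pair as a pair of undefined components. A sketch under that assumption:

```python
def splitValue(v):
    # an 'undefined' pair splits into two 'undefined' components,
    # so that safeAdd can pass the value through
    if v == 'undefined':
        return ('undefined', 'undefined')
    return v

print(splitValue((2, 1)))           # (2, 1)
```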
Now, we can define our fib machine. It is a great example of building a complex machine out of
very nearly trivial components. In fact, we will see in the next module that there is an interesting
and important class of machines that can be constructed with cascade and parallel compositions
of delay, adder, and gain machines. It is crucial for the delay machines to have the right values
(as shown in the figure) in order for the sequence to start off correctly.
>>> fib = sm.Feedback(sm.Cascade(sm.Parallel(sm.Delay(1),
sm.Cascade(sm.Delay(1), sm.Delay(0))),
Adder()))
>>> fib.run(verbose = True)
Start state: ((1, (1, 0)), None)
Step: 0
Feedback_100
Cascade_101
Parallel_102
Delay_103 In: 1 Out: 1 Next State: 1
Cascade_104
Delay_105 In: 1 Out: 1 Next State: 1
Delay_106 In: 1 Out: 0 Next State: 1
Adder_107 In: (1, 0) Out: 1 Next State: 1
Step: 1
Feedback_100
Cascade_101
Parallel_102
Delay_103 In: 2 Out: 1 Next State: 2
Cascade_104
Delay_105 In: 2 Out: 1 Next State: 2
Delay_106 In: 1 Out: 1 Next State: 1
Adder_107 In: (1, 1) Out: 2 Next State: 2
Step: 2
Feedback_100
Cascade_101
Parallel_102
Delay_103 In: 3 Out: 2 Next State: 3
Cascade_104
Delay_105 In: 3 Out: 2 Next State: 3
Delay_106 In: 2 Out: 1 Next State: 2
Adder_107 In: (2, 1) Out: 3 Next State: 3
Step: 3
Feedback_100
Cascade_101
Parallel_102
Delay_103 In: 5 Out: 3 Next State: 5
Cascade_104
Delay_105 In: 5 Out: 3 Next State: 5
Delay_106 In: 3 Out: 2 Next State: 3
Adder_107 In: (3, 2) Out: 5 Next State: 5
...
[1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
Exercise 4.6.
What would we have to do to this machine to get the sequence [1, 1, 2,
3, 5, ...]?
Exercise 4.7.
Define fib as a composition involving only two delay components and an
adder. You might want to use an instance of the Wire class.
A Wire is the completely passive machine, whose output is always instantaneously equal to its input. It is not very interesting by itself, but
sometimes handy when building things.
class Wire(SM):
def getNextState(self, state, inp):
return inp
Exercise 4.8.
Use feedback and a multiplier (analogous to Adder) to make a machine
whose output doubles on every step.
Exercise 4.9.
Use feedback and a multiplier (analogous to Adder) to make a machine
whose output squares on every step.
4.2.3.3 Feedback2
The second part of figure 4.6 shows a combination we call feedback2: it takes a
machine with two inputs and one output, and connects the output of the machine to the second
input, resulting in a machine with one input and one output.
Feedback2 is very similar to the basic feedback combinator, but it gives, as input to the constituent
machine, the pair of the input to the machine and the feedback value.
class Feedback2 (Feedback):
def getNextValues(self, state, inp):
(ignore, o) = self.m.getNextValues(state, (inp, ’undefined’))
(newS, ignore) = self.m.getNextValues(state, (inp, o))
return (newS, o)
4.2.3.4 FeedbackSubtract and FeedbackAdd
In feedback addition composition, we take two machines and connect them as shown below: the overall input, added to the output of m2, is the input to m1; the output of m1 is the overall output, and is also the input to m2.
If m1 and m2 are state machines, then you can create their feedback addition composition with
newM = sm.FeedbackAdd(m1, m2)
Now newM is itself a state machine. So, for example,
newM = sm.FeedbackAdd(sm.R(0), sm.Wire())
makes a machine whose output is the sum of all the inputs it has ever had (remember that sm.R is
shorthand for sm.Delay). You can test it by feeding it a sequence of inputs; in the example below,
it is the numbers 0 through 9:
>>> newM.transduce(range(10))
[0, 0, 1, 3, 6, 10, 15, 21, 28, 36]
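The pattern in this output has a simple closed form: because the delay holds the running total back one step, the output at time t is the sum of all inputs strictly before t. A quick plain-Python check of that claim:

```python
def feedbackAddModel(inputs):
    # output[t] = sum of inputs[0..t-1]; this models the behavior of
    # sm.FeedbackAdd(sm.R(0), sm.Wire()) on the same input sequence
    return [sum(inputs[:t]) for t in range(len(inputs))]

print(feedbackAddModel(list(range(10))))
# [0, 0, 1, 3, 6, 10, 15, 21, 28, 36]
```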
Feedback subtraction composition is the same, except the output of m2 is subtracted from the
input, to get the input to m1.
Note that if you want to apply one of the feedback operators in a situation where there is only one
machine, you can use the sm.Gain(1.0) machine (defined section 4.1.2.2.1), which is essentially
a wire, as the other argument.
4.2.3.5 Factorial
We will do one more tricky example, and illustrate the use of Feedback2. What if we wanted to generate the sequence of numbers {1!, 2!, 3!, 4!, . . .} (where k! = 1 · 2 · 3 · · · k)? We can do so by multiplying the previous value of the sequence by a number equal to the
“index” of the sequence. Figure 4.9 shows the structure of a machine for solving this problem. It
uses a counter (which is, as we saw before, made with feedback around a delay and increment)
as the input to a machine that takes a single input, and multiplies it by the output value of the
machine, fed back through a delay.
Figure 4.9 Machine to generate the Factorial sequence.
Here is how to do it in Python; we take advantage of having defined counter machines to abstract
away from them and use that definition here without thinking about its internal structure. The
initial values in the delays get the series started off in the right place. What would happen if we
started at 0?
fact = sm.Cascade(makeCounter(1, 1),
sm.Feedback2(sm.Cascade(Multiplier(), sm.Delay(1))))
>>> fact.run(verbose = True)
Start state: ((None, 1), (None, 1))
Step: 0
Cascade_1
Feedback_2
Cascade_3
Increment_4 In: 1 Out: 2 Next State: 2
Delay_5 In: 2 Out: 1 Next State: 2
Feedback2_6
Cascade_7
Multiplier_8 In: (1, 1) Out: 1 Next State: 1
Delay_9 In: 1 Out: 1 Next State: 1
Step: 1
Cascade_1
Feedback_2
Cascade_3
Increment_4 In: 2 Out: 3 Next State: 3
Delay_5 In: 3 Out: 2 Next State: 3
Feedback2_6
Cascade_7
Multiplier_8 In: (2, 1) Out: 2 Next State: 2
Delay_9 In: 2 Out: 1 Next State: 2
Step: 2
Cascade_1
Feedback_2
Cascade_3
Increment_4 In: 3 Out: 4 Next State: 4
Delay_5 In: 4 Out: 3 Next State: 4
Feedback2_6
Cascade_7
Multiplier_8 In: (3, 2) Out: 6 Next State: 6
Delay_9 In: 6 Out: 2 Next State: 6
Step: 3
Cascade_1
Feedback_2
Cascade_3
Increment_4 In: 4 Out: 5 Next State: 5
Delay_5 In: 5 Out: 4 Next State: 5
Feedback2_6
Cascade_7
Multiplier_8 In: (4, 6) Out: 24 Next State: 24
Delay_9 In: 24 Out: 6 Next State: 24
...
[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
It might bother you that we get a 1 as the zeroth element of the sequence, but it is reasonable
as a definition of 0!, because 1 is the multiplicative identity (and is often defined that way by
mathematicians).
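The Multiplier component used in fact is not defined in this excerpt; by analogy with the Adder class above, a plausible sketch (an assumption, not the course's exact code) is:

```python
class Multiplier:
    # multiplies the two components of a pair input, passing
    # 'undefined' through; the real version subclasses SM and
    # uses splitValue and safeMul
    def getNextState(self, state, inp):
        if inp == 'undefined':
            return 'undefined'
        (i1, i2) = inp
        if i1 == 'undefined' or i2 == 'undefined':
            return 'undefined'
        return i1 * i2

print(Multiplier().getNextState(None, (3, 2)))   # 6
```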
4.2.4 Plants and controllers
One common situation in which we combine machines is to simulate the effects of coupling a
controller and a so-called “plant”. A plant is a factory or other external environment that we
might wish to control. In this case, we connect two state machines so that the output of the plant
(typically thought of as sensory observations) is input to the controller, and the output of the
controller (typically thought of as actions) is input to the plant. This is shown schematically in
figure 4.10. For example, when you build a Soar brain that interacts with the robot, the robot (and
the world in which it is operating) is the “plant” and the brain is the controller. We can build a
coupled machine by first connecting the machines in a cascade and then using feedback on that
combination.
Figure 4.10 Two coupled machines.
As a concrete example, let’s think about a robot driving straight toward a wall. It has a distance
sensor that allows it to observe the distance to the wall at time t, d[t], and it desires to stop at
some distance ddesired. The robot can execute velocity commands, and we program it to use the
following rule to set its velocity at time t, based on its most recent sensor reading:
v[t] = K(ddesired − d[t − 1]) .
This controller can also be described as a state machine, whose input sequence is the observed
values of d and whose output sequence is the values of v.
S = numbers
I = numbers
O = numbers
n(s, i) = K(ddesired − i)
o(s) = s
s0 = dinit
Now, we can think about the “plant”; that is, the relationship between the robot and the world.
The distance of the robot to the wall changes at each time step depending on the robot’s forward
velocity and the length of the time steps. Let δT be the length of time between velocity commands
issued by the robot. Then we can describe the world with the equation:
d[t] = d[t − 1] − δT v[t − 1] ,
which assumes that a positive velocity moves the robot toward the wall (and therefore decreases
the distance). This system can be described as a state machine, whose input sequence is the values
of the robot’s velocity, v, and whose output sequence is the values of its distance to the wall, d.
Finally, we can couple these two systems, as for a simulator, to get a single state machine with no
inputs. We can observe the sequence of internal values of d and v to understand how the system
is behaving.
In Python, we start by defining the controller machine; the values k and dDesired are constants
of the whole system.
k = -1.5
dDesired = 1.0
class WallController(SM):
def getNextState(self, state, inp):
return safeMul(k, safeAdd(dDesired, safeMul(-1, inp)))
The output being generated is actually k * (dDesired - inp), but because this method is going to be used in a feedback machine, it might have to deal with ’undefined’ as an input. It has
no delay built into it.
Think about why we want k to be negative. What happens when the robot is closer to the wall
than desired? What happens when it is farther from the wall than desired?
Now, we can define a class that describes the behavior of the “plant”:
deltaT = 0.1
class WallWorld(SM):
startState = 5
def getNextValues(self, state, inp):
return (state - deltaT * inp, state)
Setting startState = 5 means that the robot starts 5 meters from the wall. Note that the output
of this machine does not depend instantaneously on the input; so there is a delay in it.
Now, we can define a general combinator for coupling two machines, as in a plant and controller:
def coupledMachine(m1, m2):
return sm.Feedback(sm.Cascade(m1, m2))
We can use it to connect our controller to the world, and run it:
>>> wallSim = coupledMachine(WallController(), WallWorld())
>>> wallSim.run(30)
[5, 4.4000000000000004, 3.8900000000000001, 3.4565000000000001,
3.088025, 2.77482125, 2.5085980624999999, 2.2823083531249999,
2.0899621001562498, 1.9264677851328122, 1.7874976173628905,
1.6693729747584569, 1.5689670285446884, 1.483621974262985,
1.4110786781235374, 1.3494168764050067, 1.2970043449442556,
1.2524536932026173, 1.2145856392222247, 1.1823977933388909,
1.1550381243380574, 1.1317824056873489, 1.1120150448342465,
1.0952127881091096, 1.0809308698927431, 1.0687912394088317,
1.058472553497507, 1.049701670472881, 1.0422464199019488,
1.0359094569166565]
Because WallWorld is the second machine in the cascade, its output is the output of the whole
machine; so, we can see that the distance from the robot to the wall is converging monotonically
to dDesired (which is 1).
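That convergence can be double-checked with a plain-Python model of the same loop; this sketch collapses the state-machine plumbing into one update per step (same constants as above), and its first few values match the run shown:

```python
k = -1.5
dDesired = 1.0
deltaT = 0.1

def simulateWall(steps, d0=5.0):
    # each step: the controller computes a velocity from the current
    # distance, and the plant integrates it to get the new distance
    d = d0
    distances = []
    for _ in range(steps):
        distances.append(d)
        v = k * (dDesired - d)      # controller
        d = d - deltaT * v          # plant
    return distances
```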
Exercise 4.10.
What kind of behavior do you get with different values of k?
4.2.5 Conditionals
We might want to use different machines depending on something that is happening in the outside world. Here we describe three different conditional combinators that make choices, at the
run-time of the machine, about what to do.
4.2.5.1 Switch
We will start by considering a conditional combinator that runs two machines in parallel, but
decides on every input whether to send the input into | https://ocw.mit.edu/courses/6-01sc-introduction-to-electrical-engineering-and-computer-science-i-spring-2011/063daea1b8a3573d2aff0f0b96d390da_MIT6_01SCS11_chap04.pdf |
conditional combinator that runs two machines in parallel, but
decides on every input whether to send the input into one machine or the other. So, only one of
the parallel machines has its state updated on each step. We will call this switch, to emphasize the
fact that the decision about which machine to execute is being re-made on every step.
Implementing this requires us to maintain the states of both machines, just as we did for parallel
combination. The getNextValues method tests the condition and then gets a new state and
output from the appropriate constituent machine; it also has to be sure to pass through the old
state for the constituent machine that was not updated this time.
class Switch (SM):
def __init__(self, condition, sm1, sm2):
self.m1 = sm1
self.m2 = sm2
self.condition = condition
self.startState = (self.m1.startState, self.m2.startState)
def getNextValues(self, state, inp):
(s1, s2) = state
if self.condition(inp):
(ns1, o) = self.m1.getNextValues(s1, inp)
return ((ns1, s2), o)
else:
(ns2, o) = self.m2.getNextValues(s2, inp)
return ((s1, ns2), o)
4.2.5.2 Multiplex
The switch combinator takes care to only update one of the component machines; in some other
cases, we want to update both machines on every step and simply use the condition to select the
output of one machine or the other to be the current output of the combined machine.
This is a very small variation on Switch, so we will just implement it as a subclass.
class Mux (Switch):
def getNextValues(self, state, inp):
(s1, s2) = state
(ns1, o1) = self.m1.getNextValues(s1, inp)
(ns2, o2) = self.m2.getNextValues(s2, inp)
if self.condition(inp):
return ((ns1, ns2), o1)
else:
return ((ns1, ns2), o2)
Exercise 4.11.
What is the result of running these two machines
m1 = Switch(lambda inp: inp > 100,
Accumulator(),
Accumulator())
m2 = Mux(lambda inp: inp > 100,
Accumulator(),
Accumulator())
on the input
[2, 3, 4, 200, 300, 400, 1, 2, 3]
Explain why they are the same or are different.
4.2.5.3 If
Feel free to skip this example; it is only useful in fairly complicated contexts.
The If combinator takes a condition, which is a function from the input to true or false,
and two machines. It evaluates the condition on the first input it receives. If the value
is true then it executes the first machine forever more; if it is false, then it executes the second
machine.
This can be straightforwardly implemented in Python; we will work through a slightly simplified
version of our code below. We start by defining an initializer that remembers the condition and
the two constituent state machines.
class If (SM):
startState = (’start’, None)
def __init__(self, condition, sm1, sm2):
self.sm1 = sm1
self.sm2 = sm2
self.condition = condition
Because this machine does not have an input available at start time, it cannot decide whether it is going to execute sm1 or sm2. Ultimately, the state of the If machine will be a pair of values:
the first will indicate which constituent machine we are running and the second will be the state
of that machine. But, to start, we will be in the state (’start’, None), which indicates that the
decision about which machine to execute has not yet been made.
Now, when it is time to do a state update, we have an input. We destructure the state into its
two parts, and check to see if the first component is ’start’. If so, we have to make the decision
about which machine to execute. The method getFirstRealState first calls the condition on
the current input, to decide which machine to run; then it returns the pair of a symbol indicating
which machine has been selected and the starting state of that machine. Once the first real state is
determined, then that is used to compute a transition into an appropriate next state, based on the
input.
If the machine was already in a non-start state, then it just updates the constituent state, using the
already-selected constituent machine. Similarly, to generate an output, we have to use the output
function of the appropriate machine, with special handling of the start state.
def getFirstRealState(self, inp):
if self.condition(inp):
return (’runningM1’, self.sm1.startState)
else:
return (’runningM2’, self.sm2.startState)
def getNextValues(self, state, inp):
(ifState, smState) = state
if ifState == ’start’:
(ifState, smState) = self.getFirstRealState(inp)
if ifState == ’runningM1’:
(newS, o) = self.sm1.getNextValues(smState, inp)
return ((’runningM1’, newS), o)
else:
(newS, o) = self.sm2.getNextValues(smState, inp)
return ((’runningM2’, newS), o)
4.3 Terminating state machines and sequential compositions
So far, all the machines we have discussed run forever; or, at least, until we quit giving them
inputs. But in some cases, it is particularly useful to think of a process as consisting of a sequence
of processes, one executing until termination, and then another one starting. For example, you
might want the robot to clean first room A, and then clean room B; or, for it to search in an area
until it finds a person and then sound an alarm.
Temporal combinations of machines form a new, different PCAP system for state machines. Our
primitives will be state machines, as described above, but with one additional property: they will
have a termination or done function, d(s), which takes a state and returns true if the machine has
finished execution and false otherwise.
Rather than defining a whole new class of state machines (though we could do that), we will
just augment the SM class with a default method, which says that, by default, machines do not
terminate.
def done(self, state):
return False
Then, in the definition of any subclass of SM, you are free to implement your own done method
that will override this base one. The done method is used by state machine combinators that, for
example, run one machine until it is done, and then switch to running another one.
Here is an example terminating state machine (TSM) that consumes a stream of numbers; its
output is None on the first four steps and then on the fifth step, it generates the sum of the numbers
it has seen as inputs, and then terminates. It looks just like the state machines we have seen before,
with the addition of a done method. Its state consists of two numbers: the first is the number of
times the machine has been updated and the second is the total input it has accumulated so far.
class ConsumeFiveValues(SM):
startState = (0, 0)
# count, total
def getNextValues(self, state, inp):
(count, total) = state
if count == 4:
return ((count + 1, total + inp), total + inp)
else:
return ((count + 1, total + inp), None)
def done(self, state):
(count, total) = state
return count == 5
Here is the result of running a simple example. We have modified the transduce method of SM
to stop when the machine is done.
>>> c5 = ConsumeFiveValues()
>>> c5.transduce([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], verbose = True)
Start state: (0, 0)
In: 1 Out: None Next State: (1, 1)
In: 2 Out: None Next State: (2, 3)
In: 3 Out: None Next State: (3, 6)
In: 4 Out: None Next State: (4, 10)
In: 5 Out: 15 Next State: (5, 15)
[None, None, None, None, 15]
Now we can define a new set of combinators that operate on TSMs. Each of these combinators assumes that its constituent machines are terminating state machines, and is, itself, a terminating state machine. We have to respect certain rules about TSMs when we do this. In
particular, it is not legal to call the getNextValues method on a TSM that says it is done. This
may or may not cause an actual Python error, but it is never a sensible thing to do, and may result
in meaningless answers.
4.3.1 Repeat
The simplest of the TSM combinators is one that takes a terminating state machine sm and repeats
it n times. In the Python method below, we give a default value of None for n, so that if no value
is passed in for n it will repeat forever.
class Repeat (SM):
def __init__(self, sm, n = None):
self.sm = sm
self.startState = (0, self.sm.startState)
self.n = n
The state of this machine will be the number of times the constituent machine has been executed
to completion, together with the current state of the constituent machine. So, the starting state is
a pair consisting of 0 and the starting state of the constituent machine.
Because we are going to, later, ask the constituent machine to generate an output, we are going
to adopt a convention that the constituent machine is never left in a state that is done, unless
the whole Repeat is itself done. If the constituent machine is done, then we will increment the
counter for the number of times we have repeated it, and restart it. Just in case the constituent
machine “wakes up” in a state that is done, we use a while loop here, instead of an if: we will
keep restarting this machine until the count runs out. Why? Because we promised not to leave
our constituent machine in a done state (so, for example, nobody asks for its output when it's
done), unless the whole repeat machine is done as well.
def advanceIfDone(self, counter, smState):
while self.sm.done(smState) and not self.done((counter, smState)):
counter = counter + 1
smState = self.sm.startState
return (counter, smState)
Chapter 4 State Machines
6.01— Spring 2011— April 25, 2011
154
To get the next state, we start by getting the next state of the constituent machine; then, we check
to see if the counter needs to be advanced and the machine restarted; the advanceIfDone method
handles this situation and returns the appropriate next state. The output of the Repeat machine
is just the output of the constituent machine. We just have to be sure to destructure the state of
the overall machine and pass the right part of it into the constituent.
def getNextValues(self, state, inp):
(counter, smState) = state
(smState, o) = self.sm.getNextValues(smState, inp)
(counter, smState) = self.advanceIfDone(counter, smState)
return ((counter, smState), o)
We know the whole Repeat is done if the counter is equal to n.
def done(self, state):
(counter, smState) = state
return counter == self.n
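To see the restart logic in action without the course's sm infrastructure, here is a self-contained sketch of the same Repeat semantics. The OneShot machine, the MiniRepeat class, and the run driver are stand-ins written for this example, not the actual sm module:

```python
class OneShot:
    """Stand-in for CharTSM: outputs a fixed character once, then is done."""
    startState = False
    def __init__(self, c):
        self.c = c
    def getNextValues(self, state, inp):
        return (True, self.c)
    def done(self, state):
        return state

class MiniRepeat:
    """Sketch of Repeat: run sm to completion n times."""
    def __init__(self, sm, n):
        self.sm = sm
        self.n = n
        self.startState = (0, sm.startState)
    def advanceIfDone(self, counter, smState):
        # Keep restarting until the constituent is not done or the count runs out.
        while self.sm.done(smState) and not self.done((counter, smState)):
            counter = counter + 1
            smState = self.sm.startState
        return (counter, smState)
    def getNextValues(self, state, inp):
        (counter, smState) = state
        (smState, o) = self.sm.getNextValues(smState, inp)
        return (self.advanceIfDone(counter, smState), o)
    def done(self, state):
        (counter, smState) = state
        return counter == self.n

def run(m, maxSteps=20):
    """Drive a machine with None inputs until it is done (like SM.run)."""
    state, outputs = m.startState, []
    while not m.done(state) and len(outputs) < maxSteps:
        (state, o) = m.getNextValues(state, None)
        outputs.append(o)
    return outputs

print(run(MiniRepeat(OneShot('a'), 4)))   # ['a', 'a', 'a', 'a']
```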
Now, we can see some examples of Repeat. As a primitive, here is a silly little example TSM. It
takes a character at initialization time. Its state is a Boolean, indicating whether it is done. It starts
up in state False (not done). Then it makes its first transition into state True and stays there. Its
output is always the character it was initialized with; it completely ignores its input.
class CharTSM (SM):
startState = False
def __init__(self, c):
self.c = c
def getNextValues(self, state, inp):
return (True, self.c)
def done(self, state):
return state
>>> a = CharTSM(’a’)
>>> a.run(verbose = True)
Start state: False
In: None Out: a Next State: True
[’a’]
See that it terminates after one output. But, now, we can repeat it several times.
>>> a4 = sm.Repeat(a, 4)
>>> a4.run()
[’a’, ’a’, ’a’, ’a’]
Exercise 4.12.
Would it have made a difference if we had executed:
>>> sm.Repeat(CharTSM(’a’), 4).run()
Exercise 4.13.
Monty P. thinks that the following call
>>> sm.Repeat(ConsumeFiveValues(), 3).transduce(range(100))
will generate a sequence of 14 Nones followed by the sum of the first 15
integers (starting at 0). R. Reticulatis disagrees. Who is right and why?
4.3.2 Sequence
Another useful thing to do with TSMs is to execute several different machines sequentially. That
is, take a list of TSMs, run the first one until it is done, start the next one and run it until it is done,
and so on. This machine is similar in style and structure to a Repeat TSM. Its state is a pair of
values: an index that says which of the constituent machines is currently being executed, and the
state of the current constituent.
Here is a Python class for creating a Sequence TSM. It takes as input a list of state machines; it
remembers the machines and number of machines in the list.
class Sequence (SM):
def __init__(self, smList):
self.smList = smList
self.startState = (0, self.smList[0].startState)
self.n = len(smList)
The initial state of this machine is the value 0 (because we start by executing the 0th constituent
machine on the list) and the initial state of that constituent machine.
The method for advancing is also similar to that for Repeat. The only difference is that each time,
we start the next machine in the list of machines, until we have finished executing the last one.
def advanceIfDone(self, counter, smState):
while self.smList[counter].done(smState) and counter + 1 < self.n:
counter = counter + 1
smState = self.smList[counter].startState
return (counter, smState)
To get the next state, we ask the current constituent machine for its next state, and then, if it is
done, advance the state to the next machine in the list that is not done when it wakes up. The
output of the composite machine is just the output of the current constituent.
def getNextValues(self, state, inp):
(counter, smState) = state
(smState, o) = self.smList[counter].getNextValues(smState, inp)
(counter, smState) = self.advanceIfDone(counter, smState)
return ((counter, smState), o)
We have constructed this machine so that it always advances past any constituent machine that is
done; if, in fact, the current constituent machine is done, then the whole machine is also done.
def done(self, state):
(counter, smState) = state
return self.smList[counter].done(smState)
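The same kind of standalone sketch works for Sequence; again, OneShot, MiniSequence, and run below are hypothetical stand-ins for the book's classes, useful only for checking the bookkeeping:

```python
class OneShot:
    """Stand-in for CharTSM: outputs a fixed character once, then is done."""
    startState = False
    def __init__(self, c):
        self.c = c
    def getNextValues(self, state, inp):
        return (True, self.c)
    def done(self, state):
        return state

class MiniSequence:
    """Sketch of Sequence: run each machine in smList to completion, in order."""
    def __init__(self, smList):
        self.smList = smList
        self.n = len(smList)
        self.startState = (0, smList[0].startState)
    def advanceIfDone(self, counter, smState):
        # Advance to the next machine's start state, but never past the last machine.
        while self.smList[counter].done(smState) and counter + 1 < self.n:
            counter = counter + 1
            smState = self.smList[counter].startState
        return (counter, smState)
    def getNextValues(self, state, inp):
        (counter, smState) = state
        (smState, o) = self.smList[counter].getNextValues(smState, inp)
        return (self.advanceIfDone(counter, smState), o)
    def done(self, state):
        (counter, smState) = state
        return self.smList[counter].done(smState)

def run(m, maxSteps=20):
    state, outputs = m.startState, []
    while not m.done(state) and len(outputs) < maxSteps:
        (state, o) = m.getNextValues(state, None)
        outputs.append(o)
    return outputs

print(run(MiniSequence([OneShot(c) for c in 'abc'])))   # ['a', 'b', 'c']
```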
We can make good use of the CharTSM to test our sequential combinator. First, we will try something simple:
>>> m = sm.Sequence([CharTSM(’a’), CharTSM(’b’), CharTSM(’c’)])
>>> m.run()
Start state: (0, False)
In: None Out: a Next State: (1, False)
In: None Out: b Next State: (2, False)
In: None Out: c Next State: (2, True)
[’a’, ’b’, ’c’]
Even in a test case, there is something unsatisfying about all that repetitive typing required to
make each individual CharTSM. If we are repeating, we should abstract. So, we can write a function that takes a string as input, and returns a sequential TSM that will output that string. It uses
a list comprehension to turn each character into a CharTSM that generates that character, and then
uses that sequence to make a Sequence.
def makeTextSequenceTSM(str):
return sm.Sequence([CharTSM(c) for c in str])
>>> m = makeTextSequenceTSM(’Hello World’)
>>> m.run(20, verbose = True)
Start state: (0, False)
In: None Out: H Next State: (1, False)
In: None Out: e Next State: (2, False)
In: None Out: l Next State: (3, False)
In: None Out: l Next State: (4, False)
In: None Out: o Next State: (5, False)
In: None Out:
Next State: (6, False)
In: None Out: W Next State: (7, False)
In: None Out: o Next State: (8, False)
In: None Out: r Next State: (9, False)
In: None Out: l Next State: (10, False)
In: None Out: d Next State: (10, True)
[’H’, ’e’, ’l’, ’l’, ’o’, ’ ’, ’W’, ’o’, ’r’, ’l’, ’d’]
We can also see that sequencing interacts well with the Repeat combinator.
>>> m = sm.Repeat(makeTextSequenceTSM(’abc’), 3)
>>> m.run(verbose = True)
Start state: (0, (0, False))
In: None Out: a Next State: (0, (1, False))
In: None Out: b Next State: (0, (2, False))
In: None Out: c Next State: (1, (0, False))
In: None Out: a Next State: (1, (1, False))
In: None Out: b Next State: (1, (2, False))
In: None Out: c Next State: (2, (0, False))
In: None Out: a Next State: (2, (1, False))
In: None Out: b Next State: (2, (2, False))
In: None Out: c Next State: (3, (0, False))
[’a’, ’b’, ’c’, ’a’, ’b’, ’c’, ’a’, ’b’, ’c’]
It is interesting to understand the state here. The first value is the number of times the constituent
machine of the Repeat machine has finished executing; the second value is the index of the sequential machine into its list of machines; and the last Boolean is the state of the CharTSM that is being executed, which indicates whether it is done.
4.3.3 RepeatUntil and Until
In order to use Repeat, we need to know in advance how many times we want to execute the
constituent TSM. Just as in ordinary programming, we often want to terminate when a particular
condition is met in the world. For this purpose, we can construct a new TSM combinator, called
RepeatUntil. It takes, at initialization time, a condition, which is a function from an input to
a Boolean, and a TSM. It runs the TSM to completion, then tests the condition on the input; if
the condition is true, then the RepeatUntil terminates; if it is false, then it runs the TSM to
completion again, tests the condition, etc.
Here is the Python code for implementing RepeatUntil. The state of this machine has two parts:
a Boolean indicating whether the condition is true, and the state of the constituent machine.
class RepeatUntil (SM):
def __init__(self, condition, sm):
self.sm = sm
self.condition = condition
self.startState = (False, self.sm.startState)
def getNextValues(self, state, inp):
(condTrue, smState) = state
(smState, o) = self.sm.getNextValues(smState, inp)
condTrue = self.condition(inp)
if self.sm.done(smState) and not condTrue:
smState = self.sm.getStartState()
return ((condTrue, smState), o)
def done(self, state):
(condTrue, smState) = state
return self.sm.done(smState) and condTrue
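This logic can also be exercised standalone. The ConsumeThree machine below is a smaller stand-in modeled on the book's ConsumeFiveValues (it consumes three inputs, outputting None, None, and then their sum), and MiniRepeatUntil and transduce are stand-ins for the sm module:

```python
class ConsumeThree:
    """Stand-in modeled on ConsumeFiveValues: consume 3 inputs, then emit their sum."""
    startState = (0, 0)
    def getNextValues(self, state, inp):
        (count, total) = state
        count, total = count + 1, total + inp
        return ((count, total), total if count == 3 else None)
    def done(self, state):
        return state[0] == 3

class MiniRepeatUntil:
    """Sketch of RepeatUntil: rerun sm until condition(inp) is true at completion."""
    def __init__(self, condition, sm):
        self.sm = sm
        self.condition = condition
        self.startState = (False, sm.startState)
    def getNextValues(self, state, inp):
        (condTrue, smState) = state
        (smState, o) = self.sm.getNextValues(smState, inp)
        condTrue = self.condition(inp)
        if self.sm.done(smState) and not condTrue:
            smState = self.sm.startState   # condition still false: run it again
        return ((condTrue, smState), o)
    def done(self, state):
        (condTrue, smState) = state
        return self.sm.done(smState) and condTrue

def transduce(m, inps):
    state, out = m.startState, []
    for inp in inps:
        if m.done(state):
            break
        (state, o) = m.getNextValues(state, inp)
        out.append(o)
    return out

# Repeat ConsumeThree until the input exceeds 3: two full passes are needed.
print(transduce(MiniRepeatUntil(lambda x: x > 3, ConsumeThree()), range(10)))
# [None, None, 3, None, None, 12]
```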
One important thing to note is that, in the RepeatUntil TSM the condition is only evaluated
when the constituent TSM is done. This is appropriate in some situations; but in other cases, we
would like to terminate the execution of a TSM if a condition becomes true at any single step of
the machine. We could easily implement something like this in any particular case, by defining
a special-purpose TSM class that has a done method that tests the termination condition. But,
because this structure is generally useful, we can define a general-purpose combinator, called
Until. This combinator also takes a condition and a constituent machine. It simply executes
the constituent machine, and terminates either when the condition becomes true, or when the
constituent machine terminates. As before, the state includes the value of the condition on the
last input and the state of the constituent machine.
Note that this machine will never execute the constituent machine more than once; it will either
run it once to completion (if the condition never becomes true), or terminate it early.
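The chapter does not list the Until code at this point; here is a sketch consistent with that description, with the state being the value of the condition on the last input together with the constituent state. Everything here (MiniUntil, Echo, transduce) is a stand-in, not the real sm module:

```python
class Echo:
    """A trivial non-terminating machine whose output is its input."""
    startState = None
    def getNextValues(self, state, inp):
        return (None, inp)
    def done(self, state):
        return False

class MiniUntil:
    """Sketch of Until: run sm at most once, stopping early if condition(inp) holds."""
    def __init__(self, condition, sm):
        self.sm = sm
        self.condition = condition
        self.startState = (False, sm.startState)
    def getNextValues(self, state, inp):
        (condTrue, smState) = state
        (smState, o) = self.sm.getNextValues(smState, inp)
        return ((self.condition(inp), smState), o)
    def done(self, state):
        # Done as soon as the condition was true, or the constituent finishes.
        (condTrue, smState) = state
        return condTrue or self.sm.done(smState)

def transduce(m, inps):
    state, out = m.startState, []
    for inp in inps:
        if m.done(state):
            break
        (state, o) = m.getNextValues(state, inp)
        out.append(o)
    return out

print(transduce(MiniUntil(lambda x: x == 2, Echo()), range(10)))   # [0, 1, 2]
```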
Here are some examples of using RepeatUntil and Until. First, we run the ConsumeFiveValues machine until the input is greater than 10. Because it only tests the condition when ConsumeFiveValues is done, and the condition only becomes true on the 11th step, the ConsumeFiveValues machine is run to completion three times.
def greaterThan10 (x):
return x > 10
>>> m = sm.RepeatUntil(greaterThan10, ConsumeFiveValues())
>>> m.transduce(range(20), verbose = True)
Start state: (0, 0)
In: 0 Out: None Next State: (1, 0)
In: 1 Out: None Next State: (2, 1)
In: 2 Out: None Next State: (3, 3)
In: 3 Out: None Next State: (4, 6)
In: 4 Out: 10 Next State: (0, 0)
In: 5 Out: None Next State: (1, 5)
In: 6 Out: None Next State: (2, 11)
In: 7 Out: None Next State: (3, 18)
In: 8 Out: None Next State: (4, 26)
In: 9 Out: 35 Next State: (0, 0)
In: 10 Out: None Next State: (1, 10)
In: 11 Out: None Next State: (2, 21)
In: 12 Out: None Next State: (3, 33)
In: 13 Out: None Next State: (4, 46)
In: 14 Out: 60 Next State: (5, 60)
[None, None, None, None, 10, None, None, None, None, 35, None, None, None, None, 60]
If we do Until on the basic ConsumeFiveValues machine, then it just runs ConsumeFiveValues
until it terminates normally, because the condition never becomes true during this time.
>>> m = sm.Until(greaterThan10, ConsumeFiveValues())
>>> m.transduce(range(20), verbose = True)
Start state: (False, (0, 0))
In: 0 Out: None Next State: (False, (1, 0))
In: 1 Out: None Next State: (False, (2, 1))
In: 2 Out: None Next State: (False, (3, 3))
In: 3 Out: None Next State: (False, (4, 6))
In: 4 Out: 10 Next State: (False, (5, 10))
[None, None, None, None, 10]
However, if we change the termination condition, the execution will be terminated early. Note
that we can use a lambda expression directly as an argument; sometimes this is actually clearer
than defining a function with def, but it is fine to do it either way.
>>> m = sm.Until(lambda x: x == 2, ConsumeFiveValues())
>>> m.transduce(range(20), verbose = True)
Start state: (False, (0, 0))
In: 0 Out: None Next State: (False, (1, 0))
In: 1 Out: None Next State: (False, (2, 1))
In: 2 Out: None Next State: (True, (3, 3))
[None, None, None]
If we actually want to keep repeating ConsumeFiveValues() until the condition becomes true,
we can combine Until with Repeat. Now, we see that it executes the constituent machine mul
tiple times, but terminates as soon as the condition is satisfied.
>>> m = sm.Until(greaterThan10, sm.Repeat(ConsumeFiveValues()))
>>> m.transduce(range(20), verbose = True)
Start state: (False, (0, (0, 0)))
In: 0 Out: None Next State: (False, (0, (1, 0)))
In: 1 Out: None Next State: (False, (0, (2, 1)))
In: 2 Out: None Next State: (False, (0, (3, 3)))
In: 3 Out: None Next State: (False, (0, (4, 6)))
In: 4 Out: 10 Next State: (False, (1, (0, 0)))
In: 5 Out: None Next State: (False, (1, (1, 5)))
In: 6 Out: None Next State: (False, (1, (2, 11)))
In: 7 Out: None Next State: (False, (1, (3, 18)))
In: 8 Out: None Next State: (False, (1, (4, 26)))
In: 9 Out: 35 Next State: (False, (2, (0, 0)))
In: 10 Out: None Next State: (False, (2, (1, 10)))
In: 11 Out: None Next State: (True, (2, (2, 21)))
[None, None, None, None, 10, None, None, None, None, 35, None, None]
4.4 Using a state machine to control the robot
This section gives an overview of how to control the robot with a state machine. For a much more
detailed description, see the Infrastructure Guide, which documents the io and util modules
in detail. The io module provides procedures and methods for interacting with the robot; the
util module provides procedures and methods for doing computations that are generally useful
(manipulating angles, dealing with coordinate frames, etc.)
We can implement a robot controller as a state machine whose inputs are instances of class
io.SensorInput, and whose outputs are instances of class io.Action.
Here is Python code for a brain that is controlled by the most basic of state machines. This machine
always emits the default action, io.Action(), which sets all of the output values to zero. When
the brain is set up, we create a “behavior”, which is a name we will use for a state machine that
transduces a stream of io.SensorInputs to a stream of io.Actions. Finally, we ask the behavior
to start.
Then, all we do in the step method of the robot is:
• Read the sensors, by calling io.SensorInput() to get an instance that contains sonar and odometry readings;
• Feed that sensor input to the brain state machine, by calling its step method with that as input; and
• Take the io.Action that is generated by the brain as output, and call its execute method, which causes it to actually send motor commands to the robot.
You can set the verbose flag to True if you want to see a lot of output on each step for debugging.
Inside a Soar brain, we have access to an object robot, which persists during the entire execution
of the brain, and gives us a place to store important objects (like the state machine that will be
doing all the work).
import sm
import io
class StopSM(sm.SM):
def getNextValues(self, state, inp):
return (None, io.Action())
def setup():
robot.behavior = StopSM()
robot.behavior.start()
def step():
robot.behavior.step(io.SensorInput(), verbose = False).execute()
In the following sections we will develop two simple machines for controlling a robot to move a
fixed distance or turn through a fixed angle. Then we will put them together and explore why it
can be useful to have the starting state of a machine depend on the input.
4.4.1 Rotate
Imagine that we want the robot to rotate a fixed angle, say 90 degrees, to the left of where it
is when it starts to run a behavior. We can use the robot’s odometry to measure approximately
where it is, in an arbitrary coordinate frame; but to know how much it has moved since we started,
we have to store some information in the state.
Here is a class that defines a Rotate state machine. It takes, at initialization time, a desired change
in heading.
class RotateTSM (SM):
rotationalGain = 3.0
angleEpsilon = 0.01
startState = ’start’
def __init__(self, headingDelta):
self.headingDelta = headingDelta
When it is time to start this machine, we would like to look at the robot’s current heading (theta),
add the desired change in heading, and store the result in our state as the desired heading. Then,
in order to test whether the behavior is done, we want to see whether the current heading is close
enough to the desired heading. Because the done method does not have access to the input of the
machine (it is a property only of states), we need to include the current theta in the state. So, the
state of the machine is (thetaDesired, thetaLast).
Thus, the getNextValues method looks at the state; if it is the special symbol ’start’, it means
that the machine has not previously had a chance to observe the input and see what its current
heading is, so it computes the desired heading (by adding the desired change to the current heading, and then calling a utility procedure to be sure the resulting angle is between plus and minus π), and returns it and the current heading. Otherwise, we keep the thetaDesired component
of the state, and just get a new value of theta out of the input. We generate an action with a
rotational velocity that will rotate toward the desired heading with velocity proportional to the
magnitude of the angular error.
def getNextValues(self, state, inp):
currentTheta = inp.odometry.theta
if state == ’start’:
thetaDesired = \
util.fixAnglePlusMinusPi(currentTheta + self.headingDelta)
else:
(thetaDesired, thetaLast) = state
newState = (thetaDesired, currentTheta)
action = io.Action(rvel = self.rotationalGain * \
util.fixAnglePlusMinusPi(thetaDesired - currentTheta))
return (newState, action)
Finally, we have to say which states are done. Clearly, the ’start’ state is not done; but we are
done if the most recent theta from the odometry is within some tolerance, self.angleEpsilon,
of the desired heading.
def done(self, state):
if state == ’start’:
return False
else:
(thetaDesired, thetaLast) = state
return util.nearAngle(thetaDesired, thetaLast, self.angleEpsilon)
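We can sanity-check the proportional rotation rule in a tiny simulation. The angle normalizer, the time step, and the one-line robot model below are assumptions made for this sketch; they stand in for util.fixAnglePlusMinusPi and the real robot dynamics:

```python
import math

def fix_angle(a):
    # Assumed behavior of util.fixAnglePlusMinusPi: normalize into (-pi, pi].
    while a > math.pi:
        a -= 2 * math.pi
    while a <= -math.pi:
        a += 2 * math.pi
    return a

def simulate_rotate(theta0, headingDelta, gain=3.0, eps=0.01, dt=0.1, maxSteps=200):
    """Apply rvel = gain * error until |error| < eps; return (steps, final theta)."""
    thetaDesired = fix_angle(theta0 + headingDelta)
    theta = theta0
    for step in range(maxSteps):
        error = fix_angle(thetaDesired - theta)
        if abs(error) < eps:
            return (step, theta)
        theta = fix_angle(theta + gain * error * dt)   # crude one-step robot model
    return (maxSteps, theta)

steps, final = simulate_rotate(0.0, math.pi / 2)
print(steps, final)   # converges well within the step budget
```

With this assumed dt, the angular error shrinks by a constant factor each step, so the controller homes in on the target heading exponentially.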
Exercise 4.14.
Change this machine so that it rotates through an angle, so you could give
it 2 pi or minus 2 pi to have it rotate all the way around.
4.4.2 Forward
Moving the robot forward a fixed distance is similar. In this case, we remember the robot’s x
and y coordinates when it starts, and drive straight forward until the distance between the initial
position and the current position is close to the desired distance. | https://ocw.mit.edu/courses/6-01sc-introduction-to-electrical-engineering-and-computer-science-i-spring-2011/063daea1b8a3573d2aff0f0b96d390da_MIT6_01SCS11_chap04.pdf |
The state of the machine is the
robot’s starting position and its current position.
class ForwardTSM (SM):
forwardGain = 1.0
distTargetEpsilon = 0.01
startState = ’start’
def __init__(self, delta):
self.deltaDesired = delta
def getNextValues(self, state, inp):
currentPos = inp.odometry.point()
if state == ’start’:
print "Starting forward", self.deltaDesired
startPos = currentPos
else:
(startPos, lastPos) = state
newState = (startPos, currentPos)
error = self.deltaDesired - startPos.distance(currentPos)
action = io.Action(fvel = self.forwardGain * error)
return (newState, action)
def done(self, state):
if state == ’start’:
return False
else:
(startPos, lastPos) = state
return util.within(startPos.distance(lastPos),
self.deltaDesired,
self.distTargetEpsilon)
4.4.3 Square Spiral
Imagine we would like to have the robot drive in a square spiral, similar to the one shown in
figure 4.11. One way to approach this problem is to make a “low-level” machine that can consume
a goal point and the sensor input and drive (in the absence of obstacles) to the goal point; and
then make a “high-level” machine that will keep track of where we are in the figure and feed goal
points to the low-level machine.
4.4.3.1 XYDriver
Here is a class that describes a machine that takes as input a series of pairs of goal points (expressed in the robot’s odometry frame) and sensor input structures. It generates as output a
series of actions. This machine is very nearly a pure function machine, which has the following
basic control structure:
• If the robot is headed toward the goal point, move forward.
• If it is not headed toward the goal point, rotate toward the goal point.
This decision is made on every step, and results in a robust ability to drive toward a point in
two-dimensional space.
For many uses, this machine does not need any state. But the modularity is nicer, in some cases,
if it has a meaningful done method, which depends only on the state. So, we will let the state
of this machine be whether it is done or not. It needs several constants to govern rotational and
forward speeds, and tolerances for deciding whether it is pointed close enough toward the target
and whether it has arrived close enough to the target.
class XYDriver(SM):
forwardGain = 2.0
rotationGain = 2.0
angleEps = 0.05
distEps = 0.02
startState = False
The getNextValues method embodies the control structure described above.
Figure 4.11 Square spiral path of the robot using the methods in this section.
def getNextValues(self, state, inp):
(goalPoint, sensors) = inp
robotPose = sensors.odometry
robotPoint = robotPose.point()
robotTheta = robotPose.theta
if goalPoint == None:
return (True, io.Action())
headingTheta = robotPoint.angleTo(goalPoint)
if util.nearAngle(robotTheta, headingTheta, self.angleEps):
# Pointing in the right direction, so move forward
r = robotPoint.distance(goalPoint)
if r < self.distEps:
# We’re there
return (True, io.Action())
else:
return (False, io.Action(fvel = r * self.forwardGain))
else:
# Rotate to point toward goal
headingError = util.fixAnglePlusMinusPi(\
headingTheta - robotTheta)
return (False, io.Action(rvel = headingError * self.rotationGain))
The state of the machine is just a boolean indicating whether we are done.
def done(self, state):
return state
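The geometric decision at the core of XYDriver can be checked on its own. In this sketch, math.atan2 and math.hypot stand in for the util point methods angleTo and distance, and fix_angle/near_angle mimic the assumed behavior of the util angle helpers:

```python
import math

def fix_angle(a):
    # Assumed behavior of util.fixAnglePlusMinusPi: normalize into (-pi, pi].
    while a > math.pi:
        a -= 2 * math.pi
    while a <= -math.pi:
        a += 2 * math.pi
    return a

def near_angle(a, b, eps):
    # Assumed behavior of util.nearAngle: compare angles modulo 2*pi.
    return abs(fix_angle(a - b)) < eps

def drive_decision(robotX, robotY, robotTheta, goalX, goalY,
                   angleEps=0.05, distEps=0.02,
                   forwardGain=2.0, rotationGain=2.0):
    """Return ('done'|'forward'|'rotate', command) following XYDriver's logic."""
    headingTheta = math.atan2(goalY - robotY, goalX - robotX)
    if near_angle(robotTheta, headingTheta, angleEps):
        r = math.hypot(goalX - robotX, goalY - robotY)
        if r < distEps:
            return ('done', 0.0)
        return ('forward', forwardGain * r)
    return ('rotate', rotationGain * fix_angle(headingTheta - robotTheta))

print(drive_decision(0, 0, 0.0, 1, 0))            # pointed at the goal: ('forward', 2.0)
print(drive_decision(0, 0, math.pi / 2, 1, 0))    # must rotate (clockwise) first
```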
4.4.3.2 Cascade approach
We make a spiral by building a machine that takes SensorInput objects as input and generates
pairs of subgoals and sensor inputs; such a machine can be cascaded with XYDriver to generate
a spiral.
Our implementation of this is a class called SpyroGyra. It takes the increment (amount that each
new side is larger than the previous) at initialization time. Its state consists of three components:
• direction: one of ’north’, ’south’, ’east’, or ’west’, indicating which way the robot is traveling
• length: length in meters of the current line segment being followed
• subGoal: the point in the robot’s odometry frame that defines the end of the current line segment
It requires a tolerance to decide when the current subgoal point has been reached.
class SpyroGyra(SM):
distEps = 0.02
def __init__(self, incr):
self.incr = incr
self.startState = (’south’, 0, None)
If the robot is close enough to the subgoal point, then it is time to change the state. We increment
the side length, pick the next direction (counterclockwise around the cardinal compass directions), and compute the next subgoal point. The output is just the subgoal and the sensor input,
which is what the driver needs.
def getNextValues(self, state, inp):
(direction, length, subGoal) = state
robotPose = inp.odometry
robotPoint = robotPose.point()
if subGoal == None:
subGoal = robotPoint
if robotPoint.isNear(subGoal, self.distEps):
# Time to change state
length = length + self.incr
if direction == ’east’:
direction = ’north’
subGoal.y += length
elif direction == ’north’:
direction = ’west’
subGoal.x -= length
elif direction == ’west’:
direction = ’south’
subGoal.y -= length
else: # south
direction = ’east’
subGoal.x += length
print ’new:’, direction, length, subGoal
return ((direction, length, subGoal),
(subGoal, inp))
Finally, to make the spiral, we just cascade these two machines together.
def spiroFlow(incr):
return sm.Cascade(SpyroGyra(incr), XYDriver())
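The subgoal bookkeeping can be previewed without the robot. This sketch mirrors SpyroGyra’s corner-update rule using plain (x, y) tuples in place of the util point class (an assumption for this example):

```python
def spiral_corners(incr, n, start=(0.0, 0.0)):
    """First n subgoal corners of the square spiral, starting as SpyroGyra does."""
    direction, length = 'south', 0.0
    x, y = start
    corners = []
    for _ in range(n):
        length += incr
        # Counterclockwise through the compass directions, as in getNextValues.
        if direction == 'east':
            direction, y = 'north', y + length
        elif direction == 'north':
            direction, x = 'west', x - length
        elif direction == 'west':
            direction, y = 'south', y - length
        else:  # 'south'
            direction, x = 'east', x + length
        corners.append((direction, length, (x, y)))
    return corners

for corner in spiral_corners(0.1, 4):
    print(corner)
```

The side length grows by incr at every corner, which is what produces the outward spiral.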
Exercise 4.15.
What explains the rounded sides of the path in figure 4.11?
4.5 Conclusion
State machines
State machines are such a general formalism that a huge class of discrete-time systems can be
described as state machines. The system of defining primitive machines and combinations gives
us one discipline for describing complex systems. It will turn out that there are some systems
that are conveniently defined using this discipline, but that for other kinds of systems, other
disciplines would be more natural. As you encounter complex engineering problems, your job
is to find the PCAP system that is appropriate for them, and if one does not exist already, invent
one.
State machines are such a general class of systems that although it is a useful framework for
implementing systems, we cannot generally analyze the behavior of state machines. That is, we
can’t make much in the way of generic predictions about their future behavior, except by running
them to see what will happen.
In the next module, we will look at a restricted class of state machines, whose state is representable
as a bounded history of their previous states and previous inputs, and whose output is a linear
function of those states and inputs. This is a much smaller class of systems than all state machines,
but it is nonetheless very powerful. The important lesson will be that restricting the form of the
models we are using will allow us to make stronger claims about their behavior.
Knuth on Elevator Controllers
Donald E. Knuth is a computer scientist who is famous for, among other things, his series of
textbooks (as well as for TEX, the typesetting system we use to make all of our handouts), and a
variety of other contributions to theoretical computer science.
“It is perhaps significant to note that although the author had used the elevator system for years
and thought he knew it well, it wasn’t until he attempted to write this section that he realized
there were quite a few facts about the elevator’s system of choosing directions that he did not
know. He went back to experiment with the elevator six separate times, each time believing he
had finally achieved a complete understanding of its modus operandi. (Now he is reluctant to
ride it for fear some new facet of its operation will appear, contradicting the algorithms given.)
We often fail to realize how little we know about a thing until we attempt to simulate it on a
computer.”
The Art of Computer Programming, Donald E. Knuth, Vol. 1, page 295. On the elevator system in the Mathematics Building at Cal Tech. First published in 1968.
4.6 Examples
4.6.1 Practice problem: Things
Consider the following program
def thing(inputList):
output = []
i = 0
for x in range(3):
y = 0
while y < 100 and i < len(inputList):
y = y + inputList[i]
output.append(y)
i = i + 1
return output
A. What is the value of
thing([1, 2, 3, 100, 4, 9, 500, 51, -2, 57, 103, 1, 1, 1, 1, -10, 207, 3, 1])
[1, 3, 6, 106, 4, 13, 513, 51, 49, 106]
It’s important to understand the loop structure of the Python program: It goes through (at
most) three times, and adds up the elements of the input list, generating a partial sum as
output on each step, and terminating the inner loop when the sum becomes greater than 100.
B. Write a single state machine class MySM such that MySM().transduce(inputList) gives the
same result as thing(inputList), if inputList is a list of numbers. Remember to include a
done method, that will cause it to terminate at the same time as thing.
class MySM(sm.SM):
    startState = (0, 0)
    def getNextValues(self, state, inp):
        (x, y) = state
        y += inp
        if y >= 100:
            return ((x + 1, 0), y)
        return ((x, y), y)
    def done(self, state):
        (x, y) = state
        return x >= 3
The most important step, conceptually, is deciding what the state of the machine will be.
Looking at the original Python program, we can see that we had to keep track of how many
times we had completed the outer loop, and then what the current partial sum was of the
inner loop.
The getNextValues method first increments the partial sum by the input value, and then
checks to see whether it’s time to reset. If so, it increments the ‘loop counter’ (x) component
of the state and resets the partial sum to 0. It’s important to remember that the output of the
getNextValues method is a pair, containing the next state and the output.
The done method just checks to see whether we have finished three whole iterations.
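The equivalence with thing can be sanity-checked even without the course’s sm module, using a minimal stand-in for sm.SM; the transduce/done semantics sketched below are an assumption about the library, not its actual code.

```python
class SM:
    # Assumed minimal stand-in for the course library's sm.SM:
    # transduce feeds inputs one at a time until done(state) is True.
    def transduce(self, inputs):
        state, output = self.startState, []
        for inp in inputs:
            if self.done(state):
                break
            state, out = self.getNextValues(state, inp)
            output.append(out)
        return output

    def done(self, state):
        return False  # non-terminating by default

class MySM(SM):
    startState = (0, 0)
    def getNextValues(self, state, inp):
        (x, y) = state
        y += inp
        if y >= 100:
            return ((x + 1, 0), y)
        return ((x, y), y)
    def done(self, state):
        (x, y) = state
        return x >= 3

inputList = [1, 2, 3, 100, 4, 9, 500, 51, -2, 57, 103,
             1, 1, 1, 1, -10, 207, 3, 1]
print(MySM().transduce(inputList))  # [1, 3, 6, 106, 4, 13, 513, 51, 49, 106]
```

The output matches thing(inputList) from part A, including stopping after the third reset.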
C. Recall the definition of sm.Repeat(m, n): Given a terminating state machine m, it returns
a new terminating state machine that will execute the machine m to completion n times, and
then terminate.
Use sm.Repeat and a very simple state machine that you define to create a new state machine
MyNewSM, such that MyNewSM is equivalent to an instance of MySM.
class Sum(sm.SM):
    startState = 0
    def getNextValues(self, state, inp):
        return (state + inp, state + inp)
    def done(self, state):
        # terminate once the running sum reaches 100, matching the
        # y < 100 test in the inner loop of thing
        return state >= 100

myNewSM = sm.Repeat(Sum(), 3)
4.6.2 Practice problem: Inheritance and State Machines | https://ocw.mit.edu/courses/6-01sc-introduction-to-electrical-engineering-and-computer-science-i-spring-2011/063daea1b8a3573d2aff0f0b96d390da_MIT6_01SCS11_chap04.pdf |
Recall that we have defined a Python class sm.SM to represent state machines. Here we consider
a special type of state machine, whose states are always integers that start at 0 and increment by
1 on each transition. We can represent this new type of state machines as a Python subclass of
sm.SM called CountingStateMachine.
We wish to use the CountingStateMachine class to define new subclasses that each provide a
single new method getOutput(self, state, inp) which returns just the output for that state
and input; the CountingStateMachine will take care of managing and incrementing the state,
so its subclasses don’t have to worry about it.
Here is an example of a subclass of CountingStateMachine.
class CountMod5(CountingStateMachine):
    def getOutput(self, state, inp):
        return state % 5
Instances of CountMod5 generate output sequences of the form 0, 1, 2, 3, 4, 0, 1, 2, 3,
4, 0, . . ..
Part a. Define the CountingStateMachine class. Since CountingStateMachine is a subclass
of sm.SM, you will have to provide definitions of the startState instance variable and the
getNextValues method, just as we have done for other state machines. You can assume that every
subclass of CountingStateMachine will provide an appropriate getOutput method.
class CountingStateMachine(sm.SM):
    def __init__(self):
        self.startState = 0
    def getNextValues(self, state, inp):
        return (state + 1, self.getOutput(state, inp))
Part b. Define a subclass of CountingStateMachine called AlternateZeros. Instances of
AlternateZeros should be state machines for which, on even steps, the output is the same as
the input, and on odd steps, the output is 0. That is, given inputs, i0, i1, i2, i3, . . ., they generate
outputs, i0, 0, i2, 0, . . ..
class AlternateZeros(CountingStateMachine):
    def getOutput(self, state, inp):
        if not state % 2:
            return inp
        return 0
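With the same assumed minimal stand-in for sm.SM (only transduce is sketched, and it is an assumption about the library’s behavior), the two subclasses can be exercised end to end:

```python
class SM:
    # assumed minimal stand-in for sm.SM: run every input through getNextValues
    def transduce(self, inputs):
        state, output = self.startState, []
        for inp in inputs:
            state, out = self.getNextValues(state, inp)
            output.append(out)
        return output

class CountingStateMachine(SM):
    def __init__(self):
        self.startState = 0
    def getNextValues(self, state, inp):
        # the superclass manages and increments the state;
        # subclasses only supply getOutput
        return (state + 1, self.getOutput(state, inp))

class CountMod5(CountingStateMachine):
    def getOutput(self, state, inp):
        return state % 5

class AlternateZeros(CountingStateMachine):
    def getOutput(self, state, inp):
        return inp if not state % 2 else 0

print(CountMod5().transduce([0] * 7))             # [0, 1, 2, 3, 4, 0, 1]
print(AlternateZeros().transduce([5, 7, 9, 11]))  # [5, 0, 9, 0]
```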
MIT OpenCourseWare
http://ocw.mit.edu
6.01SC Introduction to Electrical Engineering and Computer Science
Spring 2011
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/6-01sc-introduction-to-electrical-engineering-and-computer-science-i-spring-2011/063daea1b8a3573d2aff0f0b96d390da_MIT6_01SCS11_chap04.pdf |
Method of Green’s Functions
18.303 Linear Partial Differential Equations
Matthew J. Hancock
Fall 2006
We introduce another powerful method of solving PDEs. First, we need to consider
some preliminary definitions and ideas.
1 Preliminary ideas and motivation
1.1 The delta function
Ref: Guenther & Lee §10.5; Myint-U & Debnath §10.1
Definition [Delta Function] The δ-function is defined by the following three
properties,

$$\delta(x) = \begin{cases} 0, & x \neq 0, \\ \infty, & x = 0, \end{cases}$$

$$\int_{-\infty}^{\infty} \delta(x)\,dx = 1,$$

$$\int_{-\infty}^{\infty} f(x)\,\delta(x - a)\,dx = f(a),$$

where f is continuous at x = a. The last is called the sifting property of the δ-function.
To make proofs with the δ-function more rigorous, we consider a δ-sequence, that
is, a sequence of functions that converge to the δ-function, at least in a pointwise
sense. Consider the sequence
$$\delta_n(x) = \frac{n}{\sqrt{\pi}}\, e^{-(nx)^2}.$$

Note that

$$\int_{-\infty}^{\infty} \delta_n(x)\,dx = \frac{2n}{\sqrt{\pi}} \int_0^{\infty} e^{-(nx)^2}\,dx = \frac{2}{\sqrt{\pi}} \int_0^{\infty} e^{-z^2}\,dz = \operatorname{erf}(\infty) = 1.$$
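As a numerical aside (an addition, not part of the original notes), the unit-mass and sifting properties of this δ-sequence can be checked with a crude Riemann sum; the grid sizes and the test point a = 0.3 are arbitrary choices.

```python
import math

def delta_n(x, n):
    # member of the delta-sequence: (n / sqrt(pi)) * exp(-(n x)^2)
    return n / math.sqrt(math.pi) * math.exp(-(n * x) ** 2)

# total mass is 1 for every n
for n in (1, 5, 25):
    dx = 0.0005
    mass = sum(delta_n(k * dx, n) for k in range(-40000, 40000)) * dx
    assert abs(mass - 1.0) < 1e-3

# sifting property: integral of f(x) delta_n(x - a) dx -> f(a) as n grows
a = 0.3
dx = 0.0002
approx = sum(math.cos(k * dx) * delta_n(k * dx - a, 200)
             for k in range(-20000, 20000)) * dx
assert abs(approx - math.cos(a)) < 1e-3
```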
Definition [2D Delta Function] The 2D δ-function is defined by the following
three properties,

$$\delta(x, y) = \begin{cases} 0, & (x, y) \neq (0, 0), \\ \infty, & (x, y) = (0, 0), \end{cases}$$

$$\iint \delta(x, y)\,dA = 1,$$

$$\iint f(x, y)\,\delta(x - a,\, y - b)\,dA = f(a, b).$$
1.2 Green’s identities

Ref: Guenther & Lee §8.3
Recall that we derived the identity

$$\iint_D \left( G\,\nabla \cdot \mathbf{F} + \mathbf{F} \cdot \nabla G \right) dA = \oint_C (G\mathbf{F}) \cdot \hat{n}\,dS \tag{1}$$

for any scalar function G and vector-valued function F. Setting F = ∇u gives what
is called Green’s First Identity,

$$\iint_D \left( G\,\nabla^2 u + \nabla u \cdot \nabla G \right) dA = \oint_C G \left( \nabla u \cdot \hat{n} \right) dS. \tag{2}$$

Interchanging G and u and subtracting gives Green’s Second Identity,

$$\iint_D \left( u\,\nabla^2 G - G\,\nabla^2 u \right) dA = \oint_C \left( u\,\nabla G - G\,\nabla u \right) \cdot \hat{n}\,dS. \tag{3}$$

2 Solution of Laplace and Poisson equation

Ref: Guenther & Lee §5.3, §8.3; Myint-U & Debnath §10.2–10.4

Consider the BVP

$$\nabla^2 u = F \quad \text{in } D, \qquad u = f \quad \text{on } C. \tag{4}$$
Let (x, y) be a fixed arbitrary point in a 2D domain D and let | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/064b7e5d9e3296ab2219e44c53f7a1b5_greensfn.pdf |
(x, y) be a fixed arbitrary point in a 2D domain D and let (ξ, η) be a variable
point used for integration. Let r be the distance from (x, y) to (ξ, η),
Considering the Green’s identities above motivates us to write
r = (ξ
q
x)2 + (η
y)2 .
−
−
2G = δ (ξ
∇
−
−
G = 0 on C.
x, η
y) = δ (r)
in D,
(5)
2
�
The notation δ (r) is short for δ (ξ
second identity (3) gives
−
x, η
−
y). Substituting (4) and (5) into Green’s
u (x, y)
−
Z
D
Z
GF dA =
f
G nˆdS
·
∇
C
Z
Rearranging gives
u (x, y) =
GF dA +
Z
D
Z
Z
f
G nˆdS
·
∇
C
(6)
Therefore, if we can find a G that satisfies (5), we can use (6) to find the solution
u (x, y) of the BVP (4). The advantage is that finding the Green’s function G depends
only on the area D and curve C, not on F and f .
Note: this method can be generalized to 3D domains.
2.1 Finding the Green’s function

To find the Green’s function for a 2D domain D, we first find the simplest function v
that satisfies ∇²v = δ(r). Suppose that v(x, y) is axis-symmetric, that is, v = v(r).
Then

$$\nabla^2 v = v_{rr} + \frac{1}{r} v_r = \delta(r).$$

For r > 0,

$$v_{rr} + \frac{1}{r} v_r = 0.$$

Integrating gives

$$v = A \ln r + B.$$
For simplicity, we set B = 0. To find A, we integrate over a disc of radius ε centered
at (x, y), D_ε,

$$1 = \iint_{D_\varepsilon} \delta(r)\,dA = \iint_{D_\varepsilon} \nabla^2 v\,dA.$$

From the Divergence Theorem, we have

$$\iint_{D_\varepsilon} \nabla^2 v\,dA = \oint_{C_\varepsilon} \nabla v \cdot \hat{n}\,dS,$$

where C_ε is the boundary of D_ε, i.e. a circle of circumference 2πε. Combining the
previous two equations gives

$$1 = \oint_{C_\varepsilon} \nabla v \cdot \hat{n}\,dS = \oint_{C_\varepsilon} \left. \frac{\partial v}{\partial r} \right|_{r=\varepsilon} dS = \frac{A}{\varepsilon} \oint_{C_\varepsilon} dS = 2\pi A.$$

Hence

$$v(r) = \frac{1}{2\pi} \ln r.$$
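As a quick check (an addition to the notes), v = (1/2π) ln r is indeed harmonic away from the source point; a five-point finite-difference Laplacian evaluated at a few arbitrarily chosen points stays near zero.

```python
import math

def v(x, y):
    # fundamental solution of the 2D Laplacian, source at the origin
    return math.log(math.hypot(x, y)) / (2 * math.pi)

def fd_laplacian(f, x, y, h=1e-4):
    # five-point finite-difference approximation of f_xx + f_yy
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / h ** 2

for (x, y) in [(1.0, 0.5), (-0.3, 2.0), (0.7, -0.7)]:
    assert abs(fd_laplacian(v, x, y)) < 1e-5  # harmonic away from r = 0
```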
This is called the fundamental solution for the Green’s function of the Laplacian on
2D domains. For 3D domains, the fundamental solution for the Green’s function of
the Laplacian is

$$v = -\frac{1}{4\pi r}, \qquad r = \sqrt{(x - \xi)^2 + (y - \eta)^2 + (z - \zeta)^2}.$$

The Green’s function for the Laplacian on 2D domains is defined in terms of the
corresponding fundamental solution,

$$G(x, y;\, \xi, \eta) = \frac{1}{2\pi} \ln r + h,$$

where h is regular,

$$\nabla^2 h = 0, \quad (\xi, \eta) \in D, \qquad G = 0, \quad (\xi, \eta) \in C.$$

The term “regular” means that h is twice continuously differentiable in (ξ, η) on D.
Finding the Green’s function G is reduced to finding a C² function h on D that
satisfies

$$\nabla^2 h = 0, \quad (\xi, \eta) \in D, \qquad h = -\frac{1}{2\pi} \ln r, \quad (\xi, \eta) \in C.$$

The definition of G in terms of h gives the BVP (5) for G. Thus, for 2D regions D,
finding the Green’s function for the Laplacian reduces to finding h.
2.2 Examples

Ref: Myint-U & Debnath §10.6

(i) Full plane D = ℝ². There are no boundaries, so h = 0 will do, and

$$G = \frac{1}{2\pi} \ln r = \frac{1}{4\pi} \ln \left[ (\xi - x)^2 + (\eta - y)^2 \right].$$
(ii) Half plane D = {(x, y) : y > 0}. We find G by introducing what is called an
“image point” (x, −y) corresponding to (x, y). Let r be the distance from (ξ, η) to
(x, y) and r′ the distance from (ξ, η) to the image point (x, −y),

$$r = \sqrt{(\xi - x)^2 + (\eta - y)^2}, \qquad r' = \sqrt{(\xi - x)^2 + (\eta + y)^2}.$$

We add

$$h = -\frac{1}{2\pi} \ln r' = -\frac{1}{2\pi} \ln \sqrt{(\xi - x)^2 + (\eta + y)^2}$$

to G to make G = 0 on the boundary. Since the image point (x, −y) is NOT in D,
h is regular for all points (ξ, η) ∈ D, and satisfies Laplace’s equation,

$$\nabla^2 h = \frac{\partial^2 h}{\partial \xi^2} + \frac{\partial^2 h}{\partial \eta^2} = 0$$
Figure 1: Plot of the Green’s function G(x, y; ξ, η) for the Laplacian operator in the
upper half plane, for (x, y) = (√2, √2).
for (ξ, η) ∈ D. Writing things out fully, we have

$$G = \frac{1}{2\pi} \ln r + h = \frac{1}{2\pi} \ln r - \frac{1}{2\pi} \ln r' = \frac{1}{2\pi} \ln \frac{r}{r'} = \frac{1}{4\pi} \ln \frac{(\xi - x)^2 + (\eta - y)^2}{(\xi - x)^2 + (\eta + y)^2}. \tag{7}$$

G(x, y; ξ, η) is plotted in the upper half plane in Figure 1 for (x, y) = (√2, √2).
Note that G → −∞ as (ξ, η) → (x, y). Also, notice that G < 0 everywhere and
G = 0 on the boundary η = 0. These are, in fact, general properties of the Green’s
function. The Green’s function G(x, y; ξ, η) acts like a weighting function for (x, y)
and neighboring points in the plane. The solution u at (x, y) involves integrals of
the weighting G(x, y; ξ, η) times the boundary condition f(ξ, η) and forcing function
F(ξ, η).
On the boundary C, η = 0, so that G = 0 and

$$\nabla G \cdot \hat{n} = -\left. \frac{\partial G}{\partial \eta} \right|_{\eta=0} = \frac{1}{\pi} \frac{y}{(\xi - x)^2 + y^2}.$$
The solution of the BVP (4) with F = 0 on the upper half plane D can now be
written, from (6), as

$$u(x, y) = \oint_C f\,\nabla G \cdot \hat{n}\,dS = \frac{y}{\pi} \int_{-\infty}^{\infty} \frac{f(\xi)}{(\xi - x)^2 + y^2}\,d\xi,$$

which is the same as we found from the Fourier Transform, on page 13 of fourtran.pdf.
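As a numerical illustration (an addition to the notes, with arbitrarily chosen test values), applying this formula to f(ξ) = cos ξ should reproduce the bounded harmonic extension e^(−y) cos x to the upper half plane:

```python
import math

def poisson_half_plane(f, x, y, L=1000.0, n=200_000):
    # midpoint-rule approximation of u(x,y) = (y/pi) * integral of
    # f(xi) / ((xi - x)^2 + y^2) over xi, truncated to [-L, L]
    d = 2 * L / n
    total = 0.0
    for k in range(n):
        xi = -L + (k + 0.5) * d
        total += f(xi) / ((xi - x) ** 2 + y ** 2)
    return (y / math.pi) * total * d

x, y = 0.5, 1.0
u = poisson_half_plane(math.cos, x, y)
# bounded harmonic extension of cos(x) is exp(-y) * cos(x)
assert abs(u - math.exp(-y) * math.cos(x)) < 1e-2
```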
(iii) Upper right quarter plane D = {(x, y) : x > 0, y > 0}. We use the image
points (x, −y), (−x, y) and (−x, −y),

$$G = \frac{1}{2\pi} \ln \sqrt{(\xi - x)^2 + (\eta - y)^2} - \frac{1}{2\pi} \ln \sqrt{(\xi - x)^2 + (\eta + y)^2} - \frac{1}{2\pi} \ln \sqrt{(\xi + x)^2 + (\eta - y)^2} + \frac{1}{2\pi} \ln \sqrt{(\xi + x)^2 + (\eta + y)^2}. \tag{8}$$

For (ξ, η) ∈ C = ∂D (the boundary), either ξ = 0 or η = 0, and in either case, G = 0.
Thus G = 0 on the boundary of D. Also, the second, third and fourth terms on the
r.h.s. are regular for (ξ, η) ∈ D, and hence the Laplacian ∇² = ∂²/∂ξ² + ∂²/∂η² of each
of these terms is zero. The Laplacian of the first term is δ(r). Hence ∇²G = δ(r).
Thus (8) is the Green’s function in the upper right quarter plane D.
For (ξ, η) ∈ C = ∂D (the boundary),

$$\oint_C f\,\nabla G \cdot \hat{n}\,dS = \int_0^{\infty} f(0, \eta) \left( -\left. \frac{\partial G}{\partial \xi} \right|_{\xi=0} \right) d\eta + \int_0^{\infty} f(\xi, 0) \left( -\left. \frac{\partial G}{\partial \eta} \right|_{\eta=0} \right) d\xi.$$

Note that

$$\left. \frac{\partial G}{\partial \xi} \right|_{\xi=0} = \frac{-4yx\eta}{\pi \left[ x^2 + (y - \eta)^2 \right] \left[ x^2 + (y + \eta)^2 \right]}, \qquad \left. \frac{\partial G}{\partial \eta} \right|_{\eta=0} = \frac{-4yx\xi}{\pi \left[ (x - \xi)^2 + y^2 \right] \left[ (x + \xi)^2 + y^2 \right]}.$$

The solution of the BVP (4) with F = 0 on the upper right quarter plane D and
boundary condition u = f can now be written, from (6), as

$$u(x, y) = \oint_C f\,\nabla G \cdot \hat{n}\,dS = \frac{4yx}{\pi} \int_0^{\infty} \frac{\eta\,f(0, \eta)}{\left[ x^2 + (y - \eta)^2 \right] \left[ x^2 + (y + \eta)^2 \right]}\,d\eta + \frac{4yx}{\pi} \int_0^{\infty} \frac{\xi\,f(\xi, 0)}{\left[ (x - \xi)^2 + y^2 \right] \left[ (x + \xi)^2 + y^2 \right]}\,d\xi.$$
(iv) Unit disc D = {(x, y) : x² + y² ≤ 1}. By some simple geometry, for each
point (x, y) ∈ D, choosing the image point (x′, y′) along the same ray as (x, y) and a
distance 1/√(x² + y²) away from the origin guarantees that r/r′ is constant along the
circumference of the circle, where

$$r = \sqrt{(\xi - x)^2 + (\eta - y)^2}, \qquad r' = \sqrt{(\xi - x')^2 + (\eta - y')^2}.$$
[DRAW] Using the law of cosines, we obtain

$$r^2 = \tilde{\rho}^2 + \rho^2 - 2\rho\tilde{\rho}\cos\!\left(\tilde{\theta} - \theta\right), \qquad r'^2 = \tilde{\rho}^2 + \frac{1}{\rho^2} - \frac{2\tilde{\rho}}{\rho}\cos\!\left(\tilde{\theta} - \theta\right),$$

where ρ = √(x² + y²), ρ̃ = √(ξ² + η²), and θ, θ̃ are the angles the rays to (x, y) and (ξ, η)
make with the horizontal. Note that for (ξ, η) on the circumference (ξ² + η² = ρ̃² = 1),
we have

$$\frac{r^2}{r'^2} = \frac{1 + \rho^2 - 2\rho\cos\!\left(\tilde{\theta} - \theta\right)}{1 + \frac{1}{\rho^2} - \frac{2}{\rho}\cos\!\left(\tilde{\theta} - \theta\right)} = \rho^2, \qquad \tilde{\rho} = 1.$$

Thus the Green’s function for the Laplacian on the 2D disc is

$$G(\xi, \eta;\, x, y) = \frac{1}{2\pi} \ln \frac{r}{r'\rho} = \frac{1}{4\pi} \ln \frac{\tilde{\rho}^2 + \rho^2 - 2\rho\tilde{\rho}\cos\!\left(\tilde{\theta} - \theta\right)}{\rho^2\tilde{\rho}^2 + 1 - 2\rho\tilde{\rho}\cos\!\left(\tilde{\theta} - \theta\right)}.$$

Note that

$$\nabla G \cdot \hat{n} = \left. \frac{\partial G}{\partial \tilde{\rho}} \right|_{\tilde{\rho}=1} = \frac{1}{2\pi} \frac{1 - \rho^2}{1 + \rho^2 - 2\rho\cos\!\left(\tilde{\theta} - \theta\right)}.$$

Thus, the solution to the BVP (4) on the unit circle is (in polar coordinates),

$$u(\rho, \theta) = \frac{1}{2\pi} \int_0^{2\pi} \frac{\left(1 - \rho^2\right) f\!\left(\tilde{\theta}\right)}{1 + \rho^2 - 2\rho\cos\!\left(\tilde{\theta} - \theta\right)}\,d\tilde{\theta} + \frac{1}{4\pi} \int_0^{2\pi}\!\! \int_0^1 \ln \frac{\tilde{\rho}^2 + \rho^2 - 2\rho\tilde{\rho}\cos\!\left(\tilde{\theta} - \theta\right)}{\rho^2\tilde{\rho}^2 + 1 - 2\rho\tilde{\rho}\cos\!\left(\tilde{\theta} - \theta\right)}\, F\!\left(\tilde{\rho}, \tilde{\theta}\right) \tilde{\rho}\,d\tilde{\rho}\,d\tilde{\theta}.$$

The solution to Laplace’s equation is found by setting F = 0,

$$u(\rho, \theta) = \frac{1}{2\pi} \int_0^{2\pi} \frac{\left(1 - \rho^2\right) f\!\left(\tilde{\theta}\right)}{1 + \rho^2 - 2\rho\cos\!\left(\tilde{\theta} - \theta\right)}\,d\tilde{\theta}.$$

This is called the Poisson integral formula for the unit disk.
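Two standard consequences can be verified numerically (an addition to the notes): the mean value property at the centre, and the fact that for boundary data f(θ) = 3 + sin θ the formula reproduces the exact harmonic extension 3 + ρ sin θ.

```python
import math

def poisson_disc(f, rho, theta, n=2000):
    # trapezoid-rule evaluation of the Poisson integral formula (F = 0)
    d = 2 * math.pi / n
    total = sum((1 - rho ** 2) * f(k * d)
                / (1 + rho ** 2 - 2 * rho * math.cos(k * d - theta))
                for k in range(n))
    return total * d / (2 * math.pi)

f = lambda t: 3 + math.sin(t)

# mean value property: u at the centre equals the average of f
assert abs(poisson_disc(f, 0.0, 0.0) - 3.0) < 1e-9

# exact harmonic extension of f is u(rho, theta) = 3 + rho * sin(theta)
assert abs(poisson_disc(f, 0.5, 1.2) - (3 + 0.5 * math.sin(1.2))) < 1e-9
```

The trapezoid rule converges extremely fast here because the integrand is smooth and periodic.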
2.3 Conformal mapping and the Green’s function
Conformal mapping allows us to extend the number of 2D regions for which Green’s
functions of the Laplacian ∇²u can be found. We use complex notation, and let
α = x + iy be a fixed point in D and let z = ξ + iη be a variable point in D (what
we’re integrating over). If D is simply connected (a definition from complex analysis),
then by the Riemann Mapping Theorem, there is a conformal map w(z) (analytic
and one-to-one) from D into the unit disk, which maps α to the origin, w(α) = 0,
and the boundary of D to the unit circle: |w(z)| = 1 for z ∈ ∂D, and 0 ≤ |w(z)| < 1
for z ∈ D\∂D. The Green’s function G is then given by

$$G = \frac{1}{2\pi} \ln |w(z)|.$$
To see this, we need a few results from complex analysis. First, note that for z ∈ ∂D,
|w(z)| = 1, so that G = 0. Also, since w(z) is 1-1, |w(z)| > 0 for z ≠ α. Thus, we
can write w(z) = (z − α)ⁿ H(z), where H(z) is analytic and nonzero in D. Since
w(z) is 1-1, |w′(z)| > 0 on D. Thus n = 1. Hence

$$w(z) = (z - \alpha) H(z)$$

and

$$G = \frac{1}{2\pi} \ln r + h,$$

where

$$r = |z - \alpha| = \sqrt{(\xi - x)^2 + (\eta - y)^2}, \qquad h = \frac{1}{2\pi} \ln |H(z)|.$$

Since H(z) is analytic and nonzero in D, then (1/2π) ln H(z) is analytic in D and
hence its real part is harmonic, i.e. h = ℜ((1/2π) ln H(z)) satisfies ∇²h = 0 in D.
Thus by our definition above, G is the Green’s function of the Laplacian on D.
Example 1. The half plane D = {(x, y) : y > 0}. The analytic function

$$w(z) = \frac{z - \alpha}{z - \alpha^*}$$

maps the upper half plane D onto the unit disc, where asterisks denote the complex
conjugate. Note that w(α) = 0 and along the boundary of D, z = x, which is
equidistant from α and α*, so that |w(z)| = 1. Points in the upper half plane (y > 0)
are closer to α = x + iy, also in the upper half plane, than to α* = x − iy, in the
lower half plane. Thus for z ∈ D\∂D, |w(z)| = |z − α| / |z − α*| < 1. The Green’s
function is

$$G = \frac{1}{2\pi} \ln |w(z)| = \frac{1}{2\pi} \ln \frac{|z - \alpha|}{|z - \alpha^*|} = \frac{1}{2\pi} \ln \frac{r}{r'},$$

which is the same as we derived before, Eq. (7).
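The agreement with Eq. (7) can also be checked numerically (an addition to the notes), by comparing the conformal-map form of G with the image-point form at a few arbitrary points of the upper half plane:

```python
import math

def G_conformal(alpha, z):
    # G = (1/2pi) ln|w(z)| with w(z) = (z - alpha)/(z - alpha*)
    w = (z - alpha) / (z - alpha.conjugate())
    return math.log(abs(w)) / (2 * math.pi)

def G_images(alpha, z):
    # image-point form, Eq. (7)
    num = (z.real - alpha.real) ** 2 + (z.imag - alpha.imag) ** 2
    den = (z.real - alpha.real) ** 2 + (z.imag + alpha.imag) ** 2
    return math.log(num / den) / (4 * math.pi)

alpha = complex(math.sqrt(2), math.sqrt(2))
for z in (complex(1, 2), complex(-0.5, 0.3), complex(4, 1)):
    assert abs(G_conformal(alpha, z) - G_images(alpha, z)) < 1e-12
```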
3 Solution to other equations by Green’s function

Ref: Myint-U & Debnath §10.5

The method of Green’s functions can be used to solve other equations, in 2D and
3D. For instance, for a 2D region D, the problem

$$\nabla^2 u + u = F \quad \text{in } D, \qquad u = f \quad \text{on } \partial D,$$

has the fundamental solution

$$\frac{1}{4} Y_0(r),$$

where Y₀(r) is the Bessel function of order zero of the second kind. The problem

$$\nabla^2 u - u = F \quad \text{in } D, \qquad u = f \quad \text{on } \partial D,$$

has fundamental solution

$$-\frac{1}{2\pi} K_0(r),$$

where K₀(r) is the modified Bessel function of order zero of the second kind.

The Green’s function method can also be used to solve time-dependent problems,
such as the Wave Equation and the Heat Equation.
Bifurcations: baby normal forms.
Rodolfo R. Rosales, Department of Mathematics,
Massachusetts Inst. of Technology, Cambridge, Massachusetts, MA 02139
October 10, 2004
Abstract
The normal forms for the various bifurcations that can occur in a one dimensional dynamical
system (ẋ = f(x, r)) are derived via local approximations to the governing equation, valid near
the critical values where the bifurcation occurs. The derivations are non-rigorous.
Contents

1 Introduction.
  Necessary condition for a bifurcation.

2 Saddle Node bifurcations.
  General remarks on structural stability.
  Saddle node bifurcations are structurally stable.
  Structural stability and allowed perturbations.
  Normal form for a Saddle Node bifurcation.
  Remark on the variable scalings near a bifurcation.
  Theorem: reduction to normal form.
  Problem 1: formal expansion to reduce to normal form.

3 Transcritical bifurcations.
  Normal form for a transcritical bifurcation.
  Theorem: reduction to normal form.
  Structural stability for transcritical bifurcations.
  Problem 2: formal expansion to reduce to normal form.
  Problem 3: What about problem 3.2.6 in the book by Strogatz?

4 Pitchfork bifurcations.
  Introduction of the reflection symmetry.
  Simplest symmetry: f(x, r) is an odd function of x.
  Problem 4: normal form for a pitchfork bifurcation.
  Problem 5: formal expansion to reduce to normal form.
  Problem 6: proof of reduction to normal form.

5 Problem Answers.
1 Introduction.
Consider the simple one-dimensional dynamical system

$$\frac{dx}{dt} = f(x, r), \tag{1.1}$$

where we will assume that f = f(x, r) is a smooth function, and r is a parameter. We wish to study
the possible bifurcations for this system, as the parameter r varies. Because the phase portrait for
a 1-D system is fully determined by its critical (equilibrium) points, we need only study what happens
to the critical points. Bifurcations will (only) occur as these points are created, destroyed, collide, or
change stability. For higher dimensional systems, the critical points alone do not determine the phase
portrait. However, the bifurcations we study here can still occur, and are important. Furthermore,
the normal forms we develop here still apply. Thus:
Consider some critical point x = x₀ (occurring for a value r = r₀), i.e.
f(x₀, r₀) = 0. Then ask: When is (x₀, r₀) a bifurcation point? A necessary condition is:

$$f_x(x_0, r_0) = 0. \tag{1.2}$$

Why? Because otherwise the implicit function theorem would tell us that: In a neighborhood
of (x₀, r₀), the critical point equation f(x, r) = 0 has a unique (smooth) solution x = X(r), which
satisfies X(r₀) = x₀. Thus no critical points would be created, destroyed or collide at (x₀, r₀).
Further, obviously: no change of stability can occur if fx(x₀, r₀) ≠ 0.

Without loss of generality, in what follows we will assume that x₀ = r₀ = 0.
Remark 1 From equation (1.2) we see that bifurcations and undecided (linearized) stability are
intimately linked. This is true not just for 1-D systems, and (in fact) applies even for bifurcations
that do not involve critical points.
Remark 2 For higher dimensional systems (where x and f are vectors of the same dimension),
fx is a square matrix, and the condition (1.2) gets replaced by: fx is singular. The proof of this
is essentially the same as above, via the implicit function theorem (even in infinite dimensions, as
long as an appropriate version of the implicit function theorem applies,¹ the result is true). We
point out that, as long as fx has a one-dimensional kernel (zero is a multiplicity one eigenvalue),
most of what follows next applies for higher dimensional systems as well.

¹In infinite dimensions the implicit function theorem may not apply.
2 Saddle Node bifurcations.

Given a critical point, say (x, r) = (0, 0), with f(0, 0) = fx(0, 0) = 0, the most generic situation is
that where fr(0, 0) ≠ 0 and fxx(0, 0) ≠ 0. By appropriately re-scaling x and r in (1.1), if needed,
we can thus assume that:

$$f(0, 0) = f_x(0, 0) = 0, \qquad f_r(0, 0) = 1, \qquad \text{and} \qquad f_{xx}(0, 0) = -2. \tag{2.3}$$
Remark 3 For arbitrary dynamical systems (such as (1.1)), we have to be careful about assuming
that anything is exactly zero. Situations where something vanishes exactly are (generally)
structurally unstable, since arbitrarily small perturbations will destroy them. To (safely) make
such assumptions, we need extra information about the system: information that restricts the possible
perturbations, in such a way that whatever vanishes remains zero when the system is perturbed.

Remark 4 In view of the prior remark, the reader may very well wonder: How do we justify
the assumptions above, namely: f(0, 0) = fx(0, 0) = 0? The answer is that: It is the full
set of assumptions in (2.3) that is structurally stable, not just the first two. We prove
this next. Since (2.3) characterizes the saddle node bifurcations (we show this later), this will prove
that: Saddle Node bifurcations are structurally stable.
Proof: First, to show that the first two assumptions (when alone) are structurally unstable, consider
the example: f = r² + x², with the critical point (0, 0). Then change f to f = r² + x² + 10⁻³⁰,
which causes the critical point to cease to exist. This example illustrates the fact that:
Isolated critical points² are structurally unstable, thus not (generally) interesting.

Second: imagine now that f depends on some extra parameter, f = f(x, r, h), such that the
assumptions in (1.2) apply for (x, r, h) = (0, 0, 0); here h small and nonzero produces an “arbitrary”
(smooth) perturbation to the dynamical system in (1.1). Consider now the system of equations:

$$f(x, r, h) = 0, \qquad \text{and} \qquad f_x(x, r, h) = 0. \tag{2.4}$$

²These are points such that there is a neighborhood in (x, r) space where there is no other critical point.
Now (0, 0, 0) is a solution to this system, and the Jacobian matrix

$$J = \begin{pmatrix} f_x(0,0,0) & f_r(0,0,0) \\ f_{xx}(0,0,0) & f_{xr}(0,0,0) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -2 & f_{xr}(0,0,0) \end{pmatrix} \tag{2.5}$$

is non-singular there. Thus the implicit function theorem guarantees that there is a (unique) smooth
curve of solutions x = x(h) and r = r(h) to (2.4), with x(0) = 0 and r(0) = 0. Along this curve, for
h small enough, it is clear that: fr(x, r, h) ≠ 0 and fxx(x, r, h) ≠ 0. Thus, modulo normalization,
(2.3) is valid along the curve, for h small enough. This finishes the proof.
Remark 5 In the proof of structural stability in the prior remark, we assumed that the
perturbations to the dynamical system in (1.1) had the form

$$\frac{dx}{dt} = f(x, r, h), \tag{2.6}$$

with the dependence on the “extra” parameter h being smooth. This sounds reasonable, but (clearly)
it does not cover all possible (imaginable or non-imaginable) perturbations. For example, we could
consider “perturbations” of the form

$$\frac{dx}{dt} = f(x, r) + h \frac{d^2x}{dt^2}. \tag{2.7}$$

What “small” means in this case is not easy to state (and we will not even try here). However, this
example should make it clear that: when talking about structural stability, for the concept to
even make sense, the dynamical system must be thought of as belonging to some “class”,
within which the idea of “close” makes sense. Further, the answer to the question: is this
system structurally stable? will be a function of the class considered.
Let us now get back to the system in (1.1), with the assumptions in (2.3), and let us
study the bifurcation that occurs in this case: the Saddle Node bifurcation.

We proceed formally first, by expanding in Taylor series and writing the equation in the form

$$\frac{dx}{dt} = r - x^2 + O(r^2, rx, x^3), \tag{2.8}$$

where all the information in (2.3) has been used. We now look at this equation in a small (rectangular)
neighborhood of the origin, characterized by

$$|x| < \epsilon \qquad \text{and} \qquad |r| < \epsilon^2, \tag{2.9}$$

where 0 < ε ≪ 1. Then the first two terms on the right in (2.8) are O(ε²), while the rest is O(ε³). We
thus argue that the behavior of the system in the neighborhood given by (2.9) is well approximated
by the equation

$$\frac{dx}{dt} = r - x^2. \tag{2.10}$$

This is the Normal form for a Saddle Node bifurcation; see the Strogatz book for
a description of its behavior.
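The phase-line picture of (2.10) can be confirmed with a few lines of arithmetic (an illustration added here, not from the notes): for r > 0 there are equilibria at x = ±√r, the right one stable and the left one unstable, and for r < 0 there are none.

```python
import math

def f(x, r):
    # saddle-node normal form: dx/dt = r - x^2
    return r - x ** 2

r = 0.25
for x_eq, stable in ((math.sqrt(r), True), (-math.sqrt(r), False)):
    assert abs(f(x_eq, r)) < 1e-12          # equilibrium of (2.10)
    fx = -2.0 * x_eq                        # df/dx at the equilibrium
    assert (fx < 0) == stable               # sign of f_x decides stability

# r < 0: r - x^2 < 0 everywhere, so no equilibria exist
assert all(f(-2 + 0.01 * k, -0.25) < 0 for k in range(400))
```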
Remark 6 A natural question here is: Why the scaling in (2.9)? Such a question can
only be answered “after the fact”, with the answer being (basically) “because it works”. Namely,
after we have figured out what is going on, we can explain why the scaling in (2.9) is the right one
to do. As follows: at a Saddle Node bifurcation, say, at (x, r) = (0, 0), a branch of critical
point solutions, say x = X₁(r), turns “back” on itself.³ Thus, on one side of the value r = 0,
no critical points exist, while on the other side two are found, say at: x = X₁(r) and x = X₂(r).
Locally, these two curves can be joined into a single one by writing r = R(x). Then r = R(x) has
either a maximum (or a minimum) at x = 0. Hence it can, locally, be approximated by a parabola.
Hence the scaling in (2.9) is the right one. Any other scaling would miss the fact that we have a
branch of critical points turning around.

Those with a mathematical mind will probably not be very satisfied with this explanation. For them,
the theorem below might do the trick. However, note that this theorem is just a proof that equation
(2.10) is the right answer, showing that (2.9) works. It does not give any reason (or method) that
would justify (2.9) “a priori”. Simply put: advance in science and mathematics requires places at
which “insight” is needed, and (2.9) is an example of this; perhaps a very simple example, but one
nonetheless.
Theorem 1 With the hypothesis in equation (2.9), there exists a neighborhood of the origin, and
there a smooth coordinate transformation (x, t) → (X, T) of the form

$$X = x\,\Phi(x) \qquad \text{and} \qquad \frac{dT}{dt} = \Psi(x, r), \tag{2.11}$$

such that (1.1) is transformed into

$$\frac{dX}{dT} = r - X^2,$$

that is, the normal form in equation (2.10).
Furthermore: Φ(0) = 1 and Ψ(0, 0) = 1; thus X ≈ x and T ≈ t close to the origin.

³Note that this is the reason that this type of bifurcation is also known by the name of turning point bifurcation.
Rosales Bifurcations: baby normal forms.
7
IMPORTANT: the definition for the transformed time is meant to be done along the solutions.
That is: in the equation dT/dt = Ψ(x, r), x = x(t) is a solution of equation (1.1).
Proof: Using the implicit function theorem, we see that f(x, r) = 0 has a unique (also smooth)
solution r = R(x) in a neighborhood of the origin: f(x, R(x)) ≡ 0, which satisfies R(0) = 0. It
is easy to see that (dR/dx)(0) = 0 and (d²R/dx²)(0) = 2 also apply. Thus R = x² Φ(x)², where
Φ is smooth and Φ(0) = 1; this is the Φ which appears in equation (2.11).
Because f(x, R(x)) ≡ 0, we can write f(x, r) = α(x, r) (r − R(x)) = α(x, r) (r − X²), where α is
smooth and does not vanish near the origin; in fact α(0, 0) = 1.
Define now Ψ = (Φ + x Φ′) α, where the prime indicates differentiation with respect to x. It is
then easy to check that Ψ(0, 0) = 1, and that with this definition (2.11) yields

    dX/dT = r − X².

QED.
Problem 1 Implement a reduction to normal form, along lines similar to those used in theorem 1,
by formally expanding the coordinate transformation up to O(ε²), where ε is as in equation
(2.9). To do so, write the dynamical system in the expanded form

    dx/dt = r − x² + a r x + b x³ + O(ε⁴),   (2.12)

where r − x² is O(ε²), the terms a r x + b x³ are O(ε³), and a and b are constants. Then expand
the transformation

    x = X + α X² + O(ε³),   (2.13)
    dt/dT = 1 + β X + O(ε²),   (2.14)

and find what values the coefficients α and β must take so that

    dX/dT = r − X² + O(ε⁴).   (2.15)

This process can be continued so as to make the error term in (2.15) as high an order in ε as
desired, provided that f in (1.1) has enough derivatives. We point out here that: theorem 1
requires f to have only second order continuous derivatives to apply. By
contrast, the process here requires progressively higher derivatives to exist; it, however, has the
advantage of giving explicit formulas for the transformation.
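One way to check the answer to Problem 1 is with truncated polynomial arithmetic, as sketched below (this is not part of the notes). The values α = (a + b)/2 and β = b are the solution suggested by matching the O(ε³) terms; a and b are arbitrary test values, and the computation verifies that everything beyond r − X² cancels to the order kept.

```python
from fractions import Fraction

# Polynomials in (X, r) stored as {(i, j): coeff} for coeff * X**i * r**j.
# Under the saddle-node scaling (2.9), X = O(eps) and r = O(eps**2), so the
# monomial X**i r**j is O(eps**(i + 2*j)); we truncate everything beyond eps**3.
def trunc(p):
    return {(i, j): c for (i, j), c in p.items() if i + 2 * j <= 3 and c != 0}

def add(*ps):
    out = {}
    for p in ps:
        for m, c in p.items():
            out[m] = out.get(m, 0) + c
    return trunc(out)

def mul(p, q):
    out = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            m = (i1 + i2, j1 + j2)
            out[m] = out.get(m, 0) + c1 * c2
    return trunc(out)

def scale(p, s):
    return {m: s * c for m, c in p.items()}

# Arbitrary (hypothetical) test values for the constants a, b in (2.12).
a, b = Fraction(7, 10), Fraction(-13, 10)
# Candidate solution of Problem 1, from matching the O(eps^3) terms:
alpha, beta = (a + b) / 2, b

one, X, r = {(0, 0): Fraction(1)}, {(1, 0): Fraction(1)}, {(0, 1): Fraction(1)}
x = add(X, scale(mul(X, X), alpha))                 # x = X + alpha X^2, eq. (2.13)

# Right-hand side of (2.12): r - x^2 + a r x + b x^3 (+ O(eps^4))
rhs = add(r, scale(mul(x, x), -1), scale(mul(r, x), a),
          scale(mul(x, mul(x, x)), b))

# dX/dT = rhs * (dt/dT) * (dx/dX)^(-1), with dt/dT = 1 + beta X, eq. (2.14)
inv = add(one, scale(X, -2 * alpha))                # (1 + 2 alpha X)^(-1), truncated
dXdT = mul(mul(rhs, add(one, scale(X, beta))), inv)

residual = add(dXdT, scale(r, -1), mul(X, X))       # dX/dT - (r - X^2)
print("residual beyond the normal form:", residual)
```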
3 Transcritical bifurcations.

We now go back to the considerations in the introduction (section 1), and add one extra hypothesis
at the bifurcation point (x0, r0) = (0, 0). Namely: we assume that there is a smooth
branch x = χ(r) of critical points that goes through the bifurcation point.
Taking successive derivatives of the identity f(χ(r), r) ≡ 0, and evaluating them at r = 0 (where
χ = f = f_x = 0), we obtain:

    f_r(0, 0) = 0   and   f_rr(0, 0) = −μ² f_xx(0, 0) − 2 μ f_xr(0, 0),   (3.16)

where μ = (dχ/dr)(0). As before, we assume that the coefficients for which we have no information are
non-zero, and normalize (by scaling r and x in equation (1.1), if needed) so that f_xx(0, 0) = −2
and f_rx(0, 0) = 1. Thus, at (x, r) = (0, 0), we have:

    f = f_x = f_r = 0,   f_xx = −2,   f_xr = 1,   and   f_rr = 2a,   (3.17)

where a is a constant. In fact, (3.16) and (3.17) show that a = μ² − μ = (μ − 1/2)² − 1/4. Thus
1 + 4a = (2μ − 1)² ≥ 0. We do not know what the exact value of μ is; however, as usual (for
generality), we exclude the equal sign in this last inequality as "too special". Thus:

    Assume 1 + 4a > 0.   (3.18)

As in the case of the Saddle Node bifurcation, the next step is to use (3.17) to expand the equation
(1.1). This yields:

    dx/dt = r x − x² + a r² + O(x³, r x², r² x, r³).   (3.19)

We now assume⁴ that both r and x are small, of size O(ε), where 0 < ε ≪ 1. Then,
keeping up to terms of O(ε²) on the right (leading order) in (3.19), we obtain the equation:
    dx/dt = r x − x² + a r² = −(x − σ1 r) (x − σ2 r),   (3.20)

where σ1 = (1 + √(1 + 4a))/2 and σ2 = (1 − √(1 + 4a))/2. In terms of the variables X = x − σ2 r
and R = √(1 + 4a) r, this last equation takes the form:

    dX/dt = R X − X²,   (3.21)

which is the Normal form for a Transcritical bifurcation.

⁴ Compare this with (2.9).
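The factorization in (3.20) is equivalent to σ1 + σ2 = 1 and σ1 σ2 = −a. A quick numerical sanity check of this (not in the notes):

```python
import math, random

random.seed(1)
for _ in range(5):
    a = random.uniform(-0.24, 2.0)          # keeps 1 + 4a > 0, as in (3.18)
    s = math.sqrt(1 + 4 * a)
    sigma1, sigma2 = (1 + s) / 2, (1 - s) / 2
    assert abs(sigma1 + sigma2 - 1) < 1e-12
    assert abs(sigma1 * sigma2 + a) < 1e-12
    # r x - x^2 + a r^2 == -(x - sigma1 r)(x - sigma2 r) at sample points
    for x, r in [(0.3, 0.1), (-0.2, 0.05), (0.01, -0.02)]:
        lhs = r * x - x * x + a * r * r
        rhs = -(x - sigma1 * r) * (x - sigma2 * r)
        assert abs(lhs - rhs) < 1e-12
print("factorization (3.20) verified")
```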
Remark 7 The hypothesis 1 + 4a > 0 in (3.18) is very important. For, write equation (3.19) in
the form:

    dx/dt = −(x − (1/2) r)² + ((1 + 4a)/4) r² + O(x³, r x², r² x, r³).   (3.22)

Then, if 1 + 4a < 0, the leading order terms on the right in this equation would be a negative definite
quadratic form. This would imply that (x, r) = (0, 0) is the only critical point in a neighborhood of
the origin, i.e. that (x, r) = (0, 0) is an isolated critical point. As explained in remark 4, such
points are (generally) of little interest.
On the other hand, 1 + 4a = 0 would lead to a double root of the right hand side in (3.19) (at
leading order). In principle this can be interpreted as a "limit case" of the transcritical bifurcation,
where the two branches of critical points that cross at the origin become tangent there. However:
this is an extremely structurally unstable situation, where the local details of what actually happens
are controlled by high order terms; hence, again, this is a situation of little (general) interest.
Theorem 2 If the function f = f(x, r) is sufficiently smooth, the assumptions in equations
(3.17) and (3.18) guarantee that f = f(x, r) = 0 has (exactly) two branches of solutions in
a neighborhood of the origin. Furthermore, let these branches be given by x = χ1(r) and x = χ2(r).
Then χ1(r) = σ1 r + O(r²) and χ2(r) = σ2 r + O(r²).

Sketch of the proof: The calculations leading to equations (3.19) and (3.20) show that:

    f(x, r) = −(x − σ1 r) (x − σ2 r) + O(x³, r x², r² x, r³).   (3.23)

Let x = r X. Then

    f(x, r) = −r² (X − σ1) (X − σ2) + r³ O(X³, X², X).   (3.24)

Thus, g = g(X, r) = −f/r² satisfies:

    g(X, r) = (X − σ1) (X − σ2) + r h(X, r),   (3.25)

where h is some non-singular function. We note now that:

    g(σp, 0) = 0   and   g_X(σp, 0) = (σp − σq) ≠ 0,   (3.26)

where {p, q} = {1, 2}. Then the implicit function theorem guarantees that there exist smooth
solutions X = Xn(r) to the equations: g(Xn, r) = 0 and Xn(0) = σn, where n = 1 or n = 2.
Then χn = r Xn, for n = 1, 2, are the two functions in the theorem statement.
Why are there no other solutions? Well, once we have χ1 and χ2, we can write f = (x − χ1)(x − χ2) Λ,
where Λ = Λ(x, r) does not vanish at the origin; in fact 2 Λ(0, 0) = f_xx(0, 0) = −2.
QED.
The arguments made to obtain equations (3.17) and (3.18) depend on the existence of the smooth
branch of critical points x = χ(r). But the existence of this branch is not then used in the arguments
leading to the normal form (3.21). We explicitly exploit this existence in what follows below, and
use it to get a better handle on transcritical bifurcations. Thus, without loss of generality:⁵

    Assume that χ ≡ 0.   (3.27)

Then we can write f = x G(x, r), where G(0, 0) = 0, since f_x(0, 0) = 0. Other than this, we
assume that G is "generic", so that its derivatives do not vanish. In particular, we normalize the
first order derivatives so that G_r(0, 0) = −G_x(0, 0) = 1; this normalization is consistent with
the one used in (3.17), where we must take a = 0.
At this point we can invoke the implicit function theorem, which tells us that there is a function
x = z(r) such that G(z, r) = 0, with z(0) = 0 and (dz/dr)(0) = 1; note that, in this case, σ1 = 1
and σ2 = 0. Again, we use this function to factor G in the form G = (x − z(r)) H(x, r), where
H(0, 0) = −1. It follows then that we can write equation (1.1) in the form:

    (1/H) dx/dt = −z(r) x + x².   (3.28)

Thus, if we introduce a new time T by dT/dt = −H, and change parameter⁶ r → R = z(r),
the equation is transformed into its Normal Form:

    dx/dT = R x − x².   (3.29)

The above is, clearly, the equivalent of theorem 1 for transcritical bifurcations:
a proof of the existence of a local transformation into normal form.

⁵ If needed, the change of variables x → x − χ will do the trick.
⁶ Note that, for r and x small, R ≈ r and T ≈ t. Thus both R and T are acceptable new variables.
Remark 8 Note that, because G above is "generic", the situation is structurally stable.
However, this depends on the assumption that there is a branch of solutions. Transcritical
bifurcations are not structurally stable without an assumption of this type.
Problem 2 Assume that equation (3.27), and the normalizations immediately below it, apply.
Then, for x and r both small and O(ε), where 0 < ε ≪ 1, implement a reduction to normal
form, by formally expanding the coordinate transformation up to two orders in ε. To do so, write
the dynamical system in the expanded form

    dx/dt = r x − x² + b0 x³ + b1 r x² + b2 r² x + O(ε⁴),   (3.30)

where r x − x² is O(ε²), the cubic terms are O(ε³), and b0, b1, and b2 are constants. Then expand
the transformation

    dt/dT = 1 + β0 x + β1 R + O(ε²),   (3.31)
    r = R + γ R² + O(ε³),   (3.32)

and find what values the coefficients β0, β1, and γ must take so that

    dx/dT = R x − x² + O(ε⁴).   (3.33)

This process can be continued so as to make the error term in (3.33) of arbitrarily high order in ε,
provided that f in (1.1) has enough derivatives. We point out here that: the derivation leading
to equation (3.29) requires f to have only second order continuous
derivatives to apply. By contrast, the process here requires progressively higher derivatives
to exist; it, however, has the advantage of giving explicit formulas for the transformation.
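The same truncated-expansion bookkeeping settles Problem 2; here both x and r are O(ε), so a monomial x^i R^j has order i + j. The sketch below (not from the notes) tests the candidate answer β0 = b0, β1 = b0 + b1, γ = −(b0 + b1 + b2), obtained by matching the O(ε³) terms, with arbitrary test values for b0, b1, b2.

```python
from fractions import Fraction

# Polynomials in (x, R) as {(i, j): coeff}; in the transcritical scaling both
# x and R are O(eps), so x**i R**j is O(eps**(i+j)); truncate beyond eps**3.
def trunc(p):
    return {m: c for m, c in p.items() if m[0] + m[1] <= 3 and c != 0}

def add(*ps):
    out = {}
    for p in ps:
        for m, c in p.items():
            out[m] = out.get(m, 0) + c
    return trunc(out)

def mul(p, q):
    out = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            m = (i1 + i2, j1 + j2)
            out[m] = out.get(m, 0) + c1 * c2
    return trunc(out)

def scale(p, s):
    return {m: s * c for m, c in p.items()}

b0, b1, b2 = Fraction(1, 3), Fraction(-2, 5), Fraction(4, 7)   # test values
beta0, beta1, gamma = b0, b0 + b1, -(b0 + b1 + b2)             # candidate answer

one, x, R = {(0, 0): Fraction(1)}, {(1, 0): Fraction(1)}, {(0, 1): Fraction(1)}
r = add(R, scale(mul(R, R), gamma))                            # eq. (3.32)

# dx/dt from (3.30): r x - x^2 + b0 x^3 + b1 r x^2 + b2 r^2 x + O(eps^4)
dxdt = add(mul(r, x), scale(mul(x, x), -1),
           scale(mul(x, mul(x, x)), b0),
           scale(mul(r, mul(x, x)), b1),
           scale(mul(mul(r, r), x), b2))

# dx/dT = (dx/dt)(dt/dT), with dt/dT = 1 + beta0 x + beta1 R, eq. (3.31)
dxdT = mul(dxdt, add(one, scale(x, beta0), scale(R, beta1)))

residual = add(dxdT, scale(mul(R, x), -1), mul(x, x))          # dx/dT - (Rx - x^2)
print("residual beyond the normal form:", residual)
```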
Problem 3 In problem 3.2.6 in the book by Strogatz, a process somewhat analogous to the one
in problem 2 is introduced. Basically, Strogatz tells you to do the following:
Consider the system

    dx/dt = R x − x² + a x³ + O(x⁴),   (3.34)

where R ≠ 0 and a are constants. Introduce now a transformation (expanded) of the form

    x = X + b X³ + O(X⁴),   (3.35)

where b is a constant. Then show that b can be selected so that the equation for X has the form

    dX/dt = R X − X² + O(X⁴).   (3.36)

Thus the third order power is removed. The process can be generalized to remove arbitrarily high
powers of X from the equation.
Question: This process is simpler than the one employed in problem 2: it involves neither
transforming the independent variable t, nor the parameter R. Why is it not appropriate for
reducing an equation to normal form near a transcritical bifurcation?
4 Pitchfork bifurcations.

We now go back to the considerations in the introduction (section 1), and add two extra hypotheses
at the bifurcation point (x0, r0) = (0, 0), one of them being the same one that was introduced in
section 3 for the transcritical bifurcations. Namely, we assume that:

A. There is a smooth branch x = χ(r) of critical points that goes through
the bifurcation point (0, 0).

B. The problem has right-left symmetry across the branch of critical
points x = χ(r). Specifically, there is a smooth bijection x → X = X(x, r),
valid in a neighborhood of the branch x = χ, such that:
- Equation (1.1) is invariant under the transformation: Ẋ = f(X, r).
- χ is a fixed curve for the transformation: X(χ(r), r) = χ(r).
- x < χ(r) implies X > χ(r), and x > χ(r) implies X < χ(r).

Without any real loss of generality, assume that f(x, r) is an odd function
of x. Then χ = 0, X = −x, and (1.1) becomes:

    dx/dt = x g(ζ, r),   where ζ = x².   (4.37)

The bifurcation condition (1.2) yields g(0, 0) = 0. Other than this, we assume that
g is generic. After appropriate re-scaling of the variables, we thus have

    g(0, 0) = 0,   g_r(0, 0) = 1,   and   g_ζ(0, 0) = ν = ±1.   (4.38)

Note that the sign of g_ζ(0, 0) cannot be changed by scalings!

Problem 4 Expand g in (4.37) in powers of ζ and r. Show that, in a small neighborhood of the
origin (of appropriate shape; see (2.9) and (3.19 – 3.20)), the leading order terms in the equation
reduce to the normal form for a pitchfork bifurcation:

    dx/dt = r x + ν x³.
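For ν = −1 the pitchfork normal form dx/dt = r x − x³ shows the pitchfork directly: one equilibrium for r ≤ 0, three for r > 0. A minimal check (not in the notes):

```python
import math

def equilibria(r, nu=-1.0):
    """Real roots of x*(r + nu*x^2) = 0 for the pitchfork normal form."""
    roots = [0.0]
    if nu * r < 0:                      # -r/nu > 0 gives the two extra roots
        roots += [math.sqrt(-r / nu), -math.sqrt(-r / nu)]
    return sorted(roots)

print(equilibria(-0.5))   # -> [0.0]
print(equilibria(0.25))   # -> [-0.5, 0.0, 0.5]
```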
Problem 5 In a manner analogous to the ones in problems 1 (saddle-node bifurcations) and
2 (transcritical bifurcations), introduce new variables (via formal expansions) x → X, t → T, and
r → R, that reduce equation (4.37) to normal form:

    dX/dT = R X + ν X³.

HINT:
• First: Expand g in a Taylor series g = r + ν ζ + a2 r² + a1 r ζ + a0 ζ² + ... and substitute
this expansion into the equation.
• Second: Assume an appropriate size scaling for the variables x and r in terms of a small
parameter 0 < ε ≪ 1. This scaling should be consistent with the normal form for the equation.⁷
It is very important since it assures that the ordering in the expansions is kept straight, without
higher order terms being mixed with lower order ones.
• Third: Introduce expansions for R = R(r) = r + o(r) and dT/dt = H(x², r) = 1 + o(1).
IMPORTANT: Notice that, because of the symmetry in the equation, it must be that x = X and
the expansion for dT/dt must involve even powers of x only.
• Fourth: Substitute the expansions in the equation, and select the coefficients to eliminate the
higher orders beyond the normal form. Carry this computation to ONE ORDER ONLY: what are
the dominant terms in the expansions, beyond R ∼ r and dT/dt ∼ 1?

Problem 6 Prove that a transformation with the properties stated in problem 5 actually exists;
this in a way similar to the one used in theorem 1 for saddle-node bifurcations, and above equation
(3.29) for transcritical bifurcations.
HINT: Show that g(ζ, r) = 0 has a solution of the form ζ = −ν R(r). Use this solution to "factor" g
as a product, and substitute the result into the equation. It should then be obvious how to proceed.

⁷ It is the same scaling required by problem 4.
5 Problem Answers.
The problem answers will be handed out with the answers to the problem
sets.
MIT OpenCourseWare
http://ocw.mit.edu
18.385J / 2.036J Nonlinear Dynamics and Chaos
Fall 2014
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/18-385j-nonlinear-dynamics-and-chaos-fall-2014/068224e607b5dcde1629732a987ec9f2_MIT18_385JF14_BabyNormlFms.pdf |
THE MODULI SPACE OF CURVES
1. The moduli space of curves and a few remarks about its
construction
The theory of smooth algebraic curves lies at the intersection of many branches
of mathematics. A smooth complex curve may be considered as a Riemann surface.
When the genus of the curve is at least 2, then it may also be considered as a
hyperbolic two-manifold, that is, a surface with a metric of constant negative curvature.
Each of these points of view enhances our understanding of the classification
of smooth complex curves. While we will begin with an algebraic treatment of the
problem, we will later use insights offered by these other perspectives.
As a first approximation we would like to understand the functor

    Mg : {Schemes} → {sets}

that assigns to a scheme Z the set of families (up to isomorphism) X → Z flat over
Z whose geometric fibers are smooth curves of genus g.
There are two problems with this functor. First, there does not exist a scheme
that represents this functor. Recall that given a contravariant functor F from
schemes over S to sets, we say that a scheme X(F) over S and an element U(F) ∈
F(X(F)) represents the functor finely if for every S-scheme Y the map

    Hom_S(Y, X(F)) → F(Y)

given by g ↦ g*U(F) is an isomorphism.
Example 1.1. The main obstruction to the representability (in particular, to the
existence of a universal family) of Mg is curves with automorphisms. For instance,
fix a hyperelliptic curve C of genus g. Let π denote the hyperelliptic involution of
C. Let S be a K3-surface with a fixed point free involution i such that S/i is an
Enriques surface E. To be very concrete, let C be the normalization of the plane
curve defined by the equation y² = p(x), where p(x) is a polynomial of degree 2g + 2
with no repeated roots. The hyperelliptic involution is given by (x, y) ↦ (x, −y).
Let Q1, Q2, Q3 be three general ternary quadratic forms. Let the K3-surface S be
defined by the vanishing of the three polynomials Qi(x0, x1, x2) + Qi(x3, x4, x5) = 0,
with the involution that exchanges the triple (x0, x1, x2) with (x3, x4, x5). Consider
the quotient of C × S by the fixed-point free involution π × i. The quotient is a
non-trivial family over the Enriques surface E; however, every fiber is isomorphic to
C. If Mg were finely represented by a scheme, then this family would correspond
to a morphism from E to it. However, this morphism would have to be constant
since the moduli of the fibers is constant. The trivial family would also give rise to
the constant morphism. Hence, Mg cannot be finely represented.
There are two ways to remedy this problem. The first way is to ask a scheme to
only coarsely represent the functor. Recall the following definition:

Definition 1.2. Given a contravariant functor F from schemes over S to sets,
we say that a scheme X(F) over S coarsely represents the functor F if there is a
natural transformation of functors η : F → Hom_S(−, X(F)) such that
(1) η(spec(k)) : F(spec(k)) → Hom_S(spec(k), X(F)) is a bijection for every
algebraically closed field k,
(2) For any S-scheme Y and any natural transformation ψ : F → Hom_S(−, Y),
there is a unique natural transformation

    Φ : Hom_S(−, X(F)) → Hom_S(−, Y)

such that ψ = Φ ∘ η.
The main theorem of moduli theory asserts that there exists a quasi-projective
moduli scheme coarsely representing the functor Mg .
Alternatively, we can ask for a Deligne-Mumford stack that parameterizes smooth
curves. Below we will give a few details explaining how both constructions work.
There is another serious problem with the functor Mg . Most families of curves
in projective space specialize to singular curves. This makes it seem unlikely that
any moduli space of smooth curves will be proper. This, of course, is in no way
conclusive. It is useful to keep the following cautionary tale in mind.
Example 1.3. Consider a general pencil of smooth quartic plane curves specializing
to a double conic. To be explicit, fix a general, smooth quartic F in P². Let Q be
a general conic. Consider the family of curves in P² given by

    Ct : Q² + tF.

I claim that after a base change of order 2, the central fiber of this family may be
replaced by a smooth, hyperelliptic curve of genus 3. The total space of this family
is singular at the 8 points of intersection of Q and F. These are ordinary double
points of the surface. We can resolve these singularities by blowing up these points.

Figure 1. Quartics specializing to a double conic.

We now make a base change of order 2. This is obtained by taking a double cover
branched at the exceptional curves E1, ..., E8. The inverse image of the proper
transform of C0 is a double cover of P¹ branched at the 8 points. In particular,
it is a hyperelliptic curve of genus 3. The inverse image of each exceptional curve
is a rational curve with self-intersection −1. These can be blown down. Thus, after
base change, we obtain a family of genus 3 curves where every fiber is smooth.
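The genus count in the example is an instance of Riemann-Hurwitz: for a double cover of P¹ branched at B simple points, 2g − 2 = 2(−2) + B, so g = B/2 − 1. A small sanity check (not in the source):

```python
def genus_double_cover_P1(branch_points):
    """Genus of a double cover of P^1 with the given number of simple branch
    points, via Riemann-Hurwitz: 2g - 2 = 2*(-2) + B, so g = B/2 - 1."""
    assert branch_points % 2 == 0       # B must be even for a double cover
    return branch_points // 2 - 1

print(genus_double_cover_P1(8))          # the 8 points of Q meet F -> genus 3
print(genus_double_cover_P1(2 * 3 + 2))  # y^2 = p(x), deg p = 2g + 2, g = 3
```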
Exercise 1.4. Consider a general pencil of quartic curves in the plane specializing
to a quartic with a single node. Show that it is not possible to find a flat family
of curves (even after base change) that replaces the central fiber with a smooth
curve. (Hint: After blowing up the base points of the pencil, we can assume that
the total space of the family is smooth and the surface is relatively minimal. First,
assume we can replace the central fiber by a smooth curve without a base change.
Use Zariski’s main theorem to show that this is impossible. Then analyze what
happens when we perform a base change.)
The previous exercise shows that the coarse moduli scheme of smooth curves
(assuming it exists) cannot be proper. Given that curves in projective space can
become arbitrarily singular, it is an amazing fact that the moduli space of curves
can be compactified by allowing curves that have only nodes as singularities.
Definition 1.5. Consider the tuples (C, p1, ..., pn) where C is a connected, at
worst nodal, curve of arithmetic genus g and p1, ..., pn are distinct smooth points
of C. We call the tuple (C, p1, ..., pn) stable if in the normalization of the curve
any rational component has at least three distinguished points (inverse images of
nodes or of the pi) and any component of genus one has at least one distinguished
point.
Note that for there to be any stable curves the inequality 2g − 2 + n > 0 needs
to be satisfied.
Definition 1.6. Let S be a scheme. A stable curve over S is a proper, flat family
C → S whose geometric fibers are stable curves.

Theorem 1.7 (Deligne-Mumford-Knudsen). There exists a coarse moduli space
M̄g,n of stable n-pointed, genus g curves. M̄g,n is a projective variety and contains
the coarse moduli space Mg,n of smooth n-pointed genus g curves as a Zariski open
subset.
One way to construct the coarse moduli scheme of stable curves is to consider
pluri-canonically embedded curves, that is, curves embedded in projective space
P^((2n−1)(g−1)−1) by their complete linear system |nK_C| for n ≥ 3. A locally closed
subscheme K of the Hilbert scheme parameterizes the locus of n-canonical curves
of genus g. The group PGL((2n − 1)(g − 1)) acts on K. The coarse moduli scheme
may be constructed as the G.I.T. quotient of K under this action. The proof that
this construction works is lengthy. Below we will briefly explain some of the main
ingredients. We begin by recalling the key features of the construction of the Hilbert
scheme. We then recall the basics of G.I.T.
2. A few remarks about the construction of the Hilbert scheme

Assume in this section that all schemes are Noetherian. Recall that the Hilbert
functor is a contravariant functor from schemes to sets defined as follows:

Definition 2.1. Let X → S be a projective scheme, O(1) a relatively ample line
bundle and P a fixed polynomial. Let

    Hilb_P(X/S) : {Schemes/S} → {sets}

be the contravariant functor that associates to an S-scheme Y the subschemes of
X ×_S Y which are proper and flat over Y and have the Hilbert polynomial P.
A major theorem of Grothendieck asserts that the Hilbert functor is representable
by a projective scheme.

Theorem 2.2. Let X/S be a projective scheme, O(1) a relatively ample line bundle
and P a fixed polynomial. The functor Hilb_P(X/S) is represented by a morphism

    u : U_P(X/S) → Hilb_P(X/S).

Hilb_P(X/S) is projective over S.

I will explain some of the ingredients that go into the proof of this theorem,
leaving you to read [Gr], [Mum2], [K], [Se] and the references contained in those
accounts for complete details.
Let us first concentrate on the case X = Pⁿ and S = Spec(k), the spectrum of a
field k. A subscheme of projective space is determined by its equations. The
polynomials in k[x0, ..., xn] that vanish on a subscheme form an infinite-dimensional
subvector space of k[x0, ..., xn]. Suppose we knew that a finite-dimensional
subspace actually determined the schemes with a fixed Hilbert polynomial. Then
we would get an injection of the schemes with a fixed Hilbert polynomial into a
Grassmannian. We have already seen that the Grassmannian (together with its
tautological bundle) represents the functor classifying subspaces of a vector space.
Assuming the image in the Grassmannian is an algebraic subscheme, we can use
this subscheme to represent the Hilbert functor.
Given a proper subscheme Y of Pⁿ and a coherent sheaf F on Y, the higher
cohomology H^i(Y, F(m)), i > 0, vanishes for m sufficiently large. The finiteness
that we are looking for comes from the fact that, if we restrict ourselves to ideal
sheaves of subschemes with a fixed Hilbert polynomial, one can find an integer
m depending only on the Hilbert polynomial (and not on the subscheme) that
works simultaneously for the ideal sheaf of every subscheme with that Hilbert
polynomial.

Theorem 2.3. For every polynomial P, there exists an integer m_P depending only
on P such that for every subsheaf I ⊆ O_{Pⁿ} with Hilbert polynomial P and every
integer k > m_P:
(1) h^i(Pⁿ, I(k)) = 0 for i > 0;
(2) I(k) is generated by global sections;
(3) H^0(Pⁿ, I(k)) ⊗ H^0(Pⁿ, O(1)) → H^0(Pⁿ, I(k + 1)) is surjective.
How does this theorem help? Let Y ⊆ Pⁿ be a closed subscheme with Hilbert
polynomial P. Choose k > m_P. By item (2) of the theorem, I_Y(k) is generated by
global sections. Consider the exact sequence

    0 → I_Y(k) → O_{Pⁿ}(k) → O_Y(k) → 0.

This realizes H^0(Pⁿ, I_Y(k)) as a subspace of H^0(Pⁿ, O_{Pⁿ}(k)). This subspace
determines I_Y(k) and hence the subscheme Y. Since k depends only on the Hilbert
polynomial, we get an injection to G(P(k), H^0(Pⁿ, O_{Pⁿ}(k))). The image has a
natural scheme structure. This scheme, together with the restriction of the tautological
bundle to it, represents the Hilbert functor. I will now fill in some of the details,
leaving most of them to you. Let us begin with a sketch of the proof of the theorem.
Definition 2.4. A coherent sheaf F on Pⁿ is called (Castelnuovo-Mumford) m-regular
if H^i(Pⁿ, F(m − i)) = 0 for all i > 0.

Proposition 2.5. If F is an m-regular coherent sheaf on Pⁿ, then
(1) h^i(Pⁿ, F(k)) = 0 for i > 0 and k + i ≥ m.
(2) F(k) is generated by global sections if k ≥ m.
(3) H^0(Pⁿ, F(k)) ⊗ H^0(Pⁿ, O(1)) → H^0(Pⁿ, F(k + 1)) is surjective if k ≥ m.
Proof. The proposition is proved by induction on the dimension n. When n = 0,
the result is clear. Take a general hyperplane H and consider the following exact
sequence
0 � F (k − 1) � F (k) � FH (k) � 0.
When k = m − i, the associated long exact sequence of cohomology gives that
H i(F (m − i)) � H i(FH (m − i)) � H i+1(F (m − i − 1)).
In particular, if F is m-regular on Pn , then so is FH on Pn−1 . Now we can prove
the first item by induction on k. Now consider the similar long exact sequence
H i+1(F (m − i − 1) � H i+1(F (m − i)) � H i+1(FH (m − i − 1)).
The first group vanishes by induction on dimension and the third one vanishes by
the assumption that F is m regular for i → 0. We conclude that F is m + 1 regular.
Hence by induction F is k-regular for all k ≥ m. This proves item (1).
Consider the commutative diagram

H^0(F(k − 1)) ⊗ H^0(O_{P^n}(1)) --g--> H^0(F(k))
            |u                            |v
H^0(F_H(k − 1)) ⊗ H^0(O_H(1))  --f--> H^0(F_H(k))

The map u is surjective by the regularity assumption, since the restriction
H^0(F(k − 1)) → H^0(F_H(k − 1)) is surjective (its cokernel lies in
H^1(F(k − 2)) = 0). The map f is surjective by induction on the dimension. It
follows that v ∘ g is also surjective. Since the kernel of v is the image of
H^0(F(k − 1)) under multiplication by the equation of H, which is contained in
the image of g, claim (3) follows.
It is easy to deduce (2) from (3). □
The proof of the theorem is concluded if we can show that the ideal sheaves
of proper subschemes of P^n with a fixed Hilbert polynomial P are m_P-regular for
an integer m_P depending only on P. This claim also follows by induction on the
dimension n. Choose a general hyperplane H and consider the exact sequence
0 → I(m) → I(m + 1) → I_H(m + 1) → 0.
I_H is a sheaf of ideals, so we may use induction on the dimension.
Assume the Hilbert polynomial is given by
P(m) = \sum_{i=0}^{n} a_i \binom{m}{i}.
We then have
χ(I_H(m + 1)) = χ(I(m + 1)) − χ(I(m))
             = \sum_{i=0}^{n} a_i ( \binom{m+1}{i} − \binom{m}{i} )
             = \sum_{i=0}^{n−1} a_{i+1} \binom{m}{i}.
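The step from χ(I(m + 1)) − χ(I(m)) to the last sum uses Pascal's rule
\binom{m+1}{i} − \binom{m}{i} = \binom{m}{i−1}; a quick numerical check (illustrative only):

```python
from math import comb

# Pascal's rule: C(m+1, i) - C(m, i) = C(m, i-1); this is what converts
# chi(I(m+1)) - chi(I(m)) into sum_{i=0}^{n-1} a_{i+1} * C(m, i).
checks = [
    comb(m + 1, i) - comb(m, i) == comb(m, i - 1)
    for m in range(0, 10)
    for i in range(1, 8)
]
assert all(checks)
print("Pascal's rule holds")
```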
Assuming the result by induction, we get an integer m_1 depending only on the
coefficients a_1, ..., a_n such that I_H has that regularity. Considering the long
exact sequence associated to our short exact sequence, we see that H^i(I(m)) is
isomorphic to H^i(I(m + 1)) as long as i > 1 and m > m_1 − i. Since by Serre's
theorem these cohomologies vanish when m is large enough, we get the vanishing of
the higher cohomology groups. For i = 1 we only get that h^1(I(m)) is strictly
decreasing (until it reaches zero) for m ≥ m_1 − 1. We conclude that I is
(m_1 + h^1(I(m_1 − 1)))-regular. However, since I is an ideal sheaf we can bound
the latter term as follows:
h^1(I(m_1 − 1)) = h^0(I(m_1 − 1)) − χ(I(m_1 − 1)) ≤ h^0(O_{P^n}(m_1 − 1)) − χ(I(m_1 − 1)).
This bound clearly depends only on the Hilbert polynomial; this concludes the
proof of Theorem 2.3.
Now we indicate how one proceeds to deduce Theorem 2.2. So far we have given
an injection from the set of subschemes of P^n with a fixed Hilbert polynomial P
to the Grassmannian G(P(m), H^0(P^n, O_{P^n}(m))) for any m > m_P, by sending the
subscheme to the subspace H^0(P^n, I(m)) of H^0(P^n, O_{P^n}(m)). Of course, this
subspace uniquely determines the subscheme. We still have to show that the image
has a natural scheme structure and that this subscheme represents the Hilbert
functor. For this purpose we will use flattening stratifications.
Recall that a stratification of a scheme S is a finite collection S_1, ..., S_j of
locally closed subschemes of S such that
S = S_1 ⊔ ··· ⊔ S_j
is a disjoint union of these subschemes.
Proposition 2.6. Let S and T be Noetherian schemes and let F be a coherent sheaf
on P^n × S. There exists a stratification of S such that for all morphisms
f : T → S, the pull-back (1 × f)*F to P^n × T is flat over T if and only if the
morphism factors through the stratification.
This stratification is called the flattening stratification (see Lecture 8 in
[Mum2] for the details). To prove it one uses the fact that if f : X → S is a
morphism of finite type, S is integral and F is any coherent sheaf on X, then
there is a dense open subset U of S such that the restriction of F to f^{−1}(U) is
flat over U. A corollary is that S can be partitioned into finitely many locally
closed subsets S_i such that, giving each the reduced induced structure, the
restriction of F to X ×_S S_i is flat over S_i.
We can partition S into locally closed subschemes as in the previous paragraph.
Only finitely many Hilbert polynomials P_i occur. We can conclude that there is an
integer m such that if l ≥ m, then
H^i(P^n(s), F(s)(l)) = 0 for i > 0
and
π_{S*}F(l) ⊗ k(s) → H^0(P^n(s), F(s)(l))
is an isomorphism, where π_S denotes the natural projection to S.
Next one observes that (1 × f)*F is flat over T if and only if f*π_{S*}F(l) is
locally free for all l ≥ m. For each l we find the stratification {S_{l,j}} of S
such that on S_{l,j} the sheaf π_{S*}F(l) is locally free of rank j. Note that
there is the following equality between subsets of S:
∩_{l ≥ m} Supp[S_{l,j}] = ∩_{m ≤ l ≤ m+n} Supp[S_{l,j}].
This is because the Hilbert polynomials have degree at most n.
For each integer h ≥ 0, there is a well-defined locally closed subscheme of S
defined by
∩_{0 ≤ r ≤ h} S_{r, P_i(m+r)}.
When h ≥ n, these form a decreasing sequence of subschemes with the same support.
Therefore, they stabilize. These give us the required stratification.
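The fact used above, that a Hilbert polynomial has degree at most n and is therefore determined by its values at n + 1 consecutive integers, can be made concrete with Newton's forward-difference formula (the helper below is our own illustration, not from the text):

```python
from math import comb

def newton_eval(values, m0, x):
    """Evaluate at x >= m0 the unique polynomial of degree <= len(values)-1
    taking the given values at m0, m0+1, ..., m0+len(values)-1."""
    diffs, coeffs = list(values), []
    while diffs:
        coeffs.append(diffs[0])
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
    return sum(c * comb(x - m0, k) for k, c in enumerate(coeffs))

# Hilbert polynomial of a twisted cubic, P(m) = 3m + 1, has degree 1 <= n = 3,
# so n + 1 = 4 consecutive values determine it.
P = lambda m: 3 * m + 1
vals = [P(m) for m in range(5, 5 + 4)]
assert all(newton_eval(vals, 5, x) == P(x) for x in range(5, 30))
```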
The flattening stratification allows us to put a scheme structure on the image of
our map to the Grassmannian. More precisely, consider the incidence correspondence
I ⊂ P^n × G(P(m_P), H^0(P^n, O_{P^n}(m_P))).
The incidence correspondence has two projections
π_1 : I → P^n
and
π_2 : I → G(P(m_P), H^0(P^n, O_{P^n}(m_P))).
For the rest of this section we will abbreviate G(P(m_P), H^0(P^n, O_{P^n}(m_P)))
simply by G. The sheaf π_2*T(−m_P), where T is the tautological bundle on G, is an
ideal sheaf in O_{P^n×G}. Let us denote the corresponding subscheme by Y. The
flattening stratification of O_Y over G gives a subscheme H_P of G corresponding
to the Hilbert polynomial P.
(Note that this is the scheme structure that we put on the set we earlier
obtained.) The claim is that H_P represents the Hilbert functor and the universal
family is the restriction W of Y to the inverse image of H_P.
Suppose we have a subscheme X ⊂ P^n × S mapping to S via f and flat over S
(and suppose the Hilbert polynomial is P). We obtain an exact sequence
0 → f_*I_X(m_P) → f_*O_{P^n×S}(m_P) → f_*O_X(m_P) → 0.
By the universal property of the Grassmannian G, this induces a map g : S → G.
Since
f_*I_X(m) = g*π_2*I_Y(m)
for m sufficiently large, we see that (1 × g)*O_Y is flat with Hilbert polynomial
P; hence g factors through H_P by the definition of the flattening stratification.
Moreover, X is simply S ×_{H_P} W. This concludes the construction of
Hilb^P(P^n/S).
Exercise 2.7. Verify the details of the above construction.
So far we have constructed the Hilbert scheme as a quasi-projective subscheme
of the Grassmannian. To prove that it is projective it suffices to check that it
is proper. This is done by checking the valuative criterion of properness. This
follows from the following proposition ([Ha] III.9.8).
Proposition 2.8. Let X be a regular, integral scheme of dimension one. Let p ∈ X
be a closed point. Let Z ⊂ P^n_{X−p} be a closed subscheme flat over X − p. Then
there exists a unique closed subscheme Z̄ ⊂ P^n_X, flat over X, whose restriction
to P^n_{X−p} is Z.
Exercise 2.9. Deduce from the proposition that the Hilbert scheme we constructed
is projective.
Exercise 2.10. For a projective scheme X/S construct HilbP (X/S) as a locally
closed subscheme of HilbP (Pn/S).
Exercise 2.11. Suppose X and Y are projective schemes over S. Assume X is
flat over S. Let Hom(X, Y) be the functor that associates to any S-scheme T the
set of morphisms
X ×_S T → Y ×_S T.
Using our construction of the Hilbert scheme, and noting that a morphism may be
identified with its graph, construct a scheme that represents the functor
Hom(X, Y).
2.1. Examples of Hilbert schemes. In this subsection we would like to give
some explicit examples of Hilbert schemes.
Example 2.12. Consider the Hilbert scheme associated to a projective variety X
and the Hilbert polynomial 1. Then the Hilbert scheme is simply X.
Exercise 2.13. Show that if C is a smooth curve, then Hilb^n(C) is simply the
symmetric n-th power of C. In particular, Hilb^n(P^1) = P^n.
Exercise 2.14. Show that the Hilbert scheme of hypersurfaces of degree d in P^n
is isomorphic to P^{\binom{n+d}{n} − 1}.
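For Exercise 2.14, the underlying count is that degree d forms in n + 1 variables span a space of dimension \binom{n+d}{n}; a brute-force check of that count:

```python
from math import comb
from itertools import combinations_with_replacement

# Number of degree-d monomials in n+1 variables equals C(n+d, n), so the
# hypersurfaces of degree d in P^n form a P^(C(n+d, n) - 1).
def num_monomials(n, d):
    return sum(1 for _ in combinations_with_replacement(range(n + 1), d))

assert all(num_monomials(n, d) == comb(n + d, n)
           for n in range(1, 5) for d in range(1, 6))
print("dimension count confirmed")
```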
Example 2.15 (The Hilbert scheme of conics in P^3). Any degree 2 curve is
necessarily the complete intersection of a linear and a quadratic polynomial.
Moreover, the linear polynomial is uniquely determined. We thus obtain a map
Hilb^{2n+1}(P^3) → (P^3)*.
The fibers of this map are Hilb^{2n+1}(P^2), which is isomorphic to P^5. We
conclude by Zariski's main theorem that Hilb^{2n+1}(P^3) is the P^5-bundle
P(Sym^2 T*) → (P^3)*. Of course, in all this discussion we needed the fact that
Hilb^{2n+1}(P^3) is reduced.
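For a plane conic C, the Hilbert polynomial can be computed from the sequence 0 → O_{P^2}(m − 2) → O_{P^2}(m) → O_C(m) → 0, giving χ(O_C(m)) = \binom{m+2}{2} − \binom{m}{2} = 2m + 1; a quick check:

```python
from math import comb

# chi(O_C(m)) = chi(O_{P^2}(m)) - chi(O_{P^2}(m-2)) for a conic C in P^2.
def chi_conic(m):
    return comb(m + 2, 2) - comb(m, 2)

assert all(chi_conic(m) == 2 * m + 1 for m in range(0, 20))
print("Hilbert polynomial of a conic is 2m + 1")
```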
Theorem 2.16. Let X be a projective scheme over a field k and Y ⊂ X be a closed
subscheme. Then the Zariski tangent space to Hilb(X) at [Y] is naturally
isomorphic to Hom_Y(I_Y/I_Y^2, O_Y).
In particular, in our case the dimension of T_{[C]}Hilb^{2n+1}(P^3) is
h^0(N_{C/P^3}) = 8. Hence Hilb^{2n+1}(P^3) is reduced (in fact smooth).
Hilb^{2n+1}(P^3) is one of the few examples where we can answer many of the
geometric questions we can ask about a Hilbert scheme.
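A smooth conic C is a complete intersection of type (1, 2), so its normal bundle decomposes as N_{C/P^3} = O_C(1) ⊕ O_C(2) (a standard fact about complete intersections, not proved in the text); the count h^0(N_{C/P^3}) = 8 then follows from h^0(O_C(m)) = 2m + 1:

```python
# For a smooth conic C (complete intersection of degrees 1 and 2 in P^3):
# N_{C/P^3} = O_C(1) + O_C(2), and h^0(O_C(m)) = 2m + 1 for m >= 0
# (no higher cohomology since deg O_C(m) = 2m > 2g - 2 = -2).
h0 = lambda m: 2 * m + 1
dim_tangent = h0(1) + h0(2)
print(dim_tangent)  # 8
```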
We can use the Hilbert scheme of conics to solve the following question:
Question 2.17. How many conics in P3 intersect 8 general lines in P3?
As in the case of Schubert calculus, we can try to calculate this number as an
intersection in the cohomology ring. The cohomology ring of a projective bundle
over a smooth variety is easy to describe in terms of the Chern classes of the
bundle and the cohomology ring of the variety.
Theorem 2.18. Let E be a rank n vector bundle over a smooth, projective variety
X. Suppose that the Chern polynomial of E is given by \sum_i c_i(E)t^i. Let α
denote the first Chern class of the dual of the tautological bundle over PE. The
cohomology of PE is isomorphic to
H*(PE) ≅ H*(X)[α] / ⟨α^n + α^{n−1}c_1(E) + ··· + c_n(E)⟩.
If you are not familiar with Chern classes, see the handout about Chern classes.
Using Theorem 2.18 we can compute the cohomology ring of Hilb^{2n+1}(P^3). Recall
that T* on (P^3)* is a rank 3 vector bundle with Chern polynomial
c(T*) = 1 + h + h^2 + h^3.
Using the splitting principle we assume that the polynomial splits into three
linear factors
(1 + x)(1 + y)(1 + z).
Then the Chern polynomial of Sym^2(T*) is given by
(1 + 2x)(1 + 2y)(1 + 2z)(1 + x + y)(1 + x + z)(1 + y + z).
Multiplying this out and expressing it in terms of the elementary symmetric
polynomials in x, y, z, we see that
c(Sym^2(T*)) = 1 + 4h + 10h^2 + 20h^3.
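One way to check this expansion is to choose explicit Chern roots with e_1 = h, e_2 = h^2, e_3 = h^3 (for instance the roots of t^3 − ht^2 + h^2t − h^3) and truncate above degree 3, since h^4 = 0 on (P^3)*; a sympy sketch of this check:

```python
import sympy as sp

h = sp.symbols('h')
# Chern roots with e1 = h, e2 = h^2, e3 = h^3: the roots of
# t^3 - h t^2 + h^2 t - h^3 = (t - h)(t^2 + h^2), i.e. h, ih, -ih.
x, y, z = h, sp.I * h, -sp.I * h

c = sp.expand((1 + 2*x) * (1 + 2*y) * (1 + 2*z)
              * (1 + x + y) * (1 + x + z) * (1 + y + z))
c_trunc = sp.expand(sum(c.coeff(h, k) * h**k for k in range(4)))  # h^4 = 0
print(c_trunc)  # 20*h**3 + 10*h**2 + 4*h + 1
```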
It follows that the cohomology ring of Hilb^{2n+1}(P^3) is given as follows:
H*(Hilb^{2n+1}(P^3)) ≅ Z[h, α] / ⟨h^4, α^6 + 4hα^5 + 10h^2α^4 + 20h^3α^3⟩.
(Here the second relation is the rank 6 relation of Theorem 2.18 for Sym^2(T*);
the classes c_4, c_5, c_6 vanish since they live in H^{>6}((P^3)*) = 0.)
The class of the locus of conics intersecting a line l is given by 2h + α. This
can be checked by a calculation away from a locus of codimension at least 2.
Consider the locus of planes in (P^3)* that do not contain the line l. Over this
locus there is a line bundle that associates to each point (H, Q) of
Hilb^{2n+1}(P^3) the homogeneous quadratic polynomials modulo those that vanish
at H ∩ l. This line bundle is none other than the pull-back of O_{(P^3)*}. The
tautological bundle over Hilb^{2n+1}(P^3) maps to it by evaluation. The locus
where the evaluation vanishes is the locus of conics that intersect l. Hence the
class is the difference of the first Chern classes. Finally, we compute
(2h + α)^8 using the presentation of the ring to obtain 92.
Over the complex numbers we can invoke Kleiman’s theorem to deduce that
there are 92 smooth conics intersecting 8 general lines in P3 .
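The evaluation of (2h + α)^8 can be mechanized; the sketch below (our own check, assuming the rank 6 bundle relation α^6 + 4hα^5 + 10h^2α^4 + 20h^3α^3 = 0 supplied by Theorem 2.18 for Sym^2(T*)) reduces the power to the monomial basis and reads off the coefficient of the point class h^3α^5:

```python
import sympy as sp

h, a = sp.symbols('h a')          # a stands for the class alpha

# Relations: h^4 = 0 on (P^3)*, and the rank-6 projective-bundle relation.
rels = [h**4, a**6 + 4*h*a**5 + 10*h**2*a**4 + 20*h**3*a**3]

# With a > h in lex order the leading terms are h^4 and a^6, which are coprime,
# so reducing against rels gives the normal form in the basis h^i a^j
# (i <= 3, j <= 5); the unique degree-8 basis monomial is the point class.
_, rem = sp.reduced(sp.expand((2*h + a)**8), rels, a, h, order='lex')

count = sp.Poly(rem, h, a).coeff_monomial(h**3 * a**5)
print(count)  # 92
```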
Exercise 2.19. Calculate the number of conics that intersect 8 − 2i lines and
contain i points, for 0 ≤ i ≤ 3.
Exercise 2.20. Calculate the class of conics that are tangent to a plane in P3 .
Find how many conics are tangent to a general plane and intersect 7 general lines.
Exercise 2.21. Generalize the previous discussion to conics in P^4. Calculate the
numbers of conics that intersect 11 − 2i − 3j general planes, i lines and j points.
Example 2.22 (The Hilbert scheme of twisted cubics in P^3). The Hilbert
polynomial of a twisted cubic is 3t + 1. This Hilbert scheme has two components.
A general point of the first component parameterizes a smooth rational curve of
degree 3 in P^3. A general point of the second component parameterizes a degree 3
plane curve together with a point in P^3. Note that the dimension of the first
component is 12, whereas the dimension of the second component is 15. Hence
the Hilbert scheme is not pure dimensional. The component of the Hilbert scheme
parameterizing the smooth rational curves has been studied in detail. In fact,
that component is smooth.
Exercise 2.23. Describe the subschemes of P^3 that are parameterized by the
component of the Hilbert scheme that parameterizes smooth rational curves of
degree 3 in P^3.
Piene and Schlessinger proved that the component of the Hilbert scheme
parameterizing twisted cubics is smooth. In analogy with our analysis of the
Hilbert scheme of conics we can try to compute invariants of cubics using the
Hilbert scheme. Unfortunately, this turns out to be very difficult.
Problem 2.24. Calculate the number of twisted cubics intersecting 12 general
lines in P3 .
Problem 2.25. Calculate the number of twisted cubics that are tangent to 12
general quadric hypersurfaces in P^3. (Hint: There are 5,819,539,783,680 of them.)
Towards the end of the course we will see how to use the Kontsevich moduli space
to answer these questions.
Unfortunately, Hilbert schemes are often unwieldy schemes to work with. They
often have many irreducible components. It is hard to compute the dimensions of
these components. Even components of the Hilbert scheme whose generic point
parameterizes smooth curves in P3 may be everywhere non-reduced.
Example 2.26 (Mumford's example). Mumford showed that there exists a component of
the Hilbert scheme parameterizing smooth curves of degree 14 and genus 24 in P^3
that is non-reduced at the generic point of that component. See [Mum1] or [HM]
Chapter 1 Section D.
The pathological behavior of most Hilbert schemes makes them hard to use for
studying the explicit geometry of algebraic varieties. In fact, the Hilbert schemes
often exhibit behavior that is arbitrarily bad. For instance, R. Vakil recently proved
that all possible singularities occur in some component of the Hilbert scheme of
curves in projective space.
Theorem 2.27 (Murphy's Law). Every singularity class of finite type over Spec Z
occurs in a Hilbert scheme of curves in some projective space.
3. Basics about curves
Here we collect some basic facts about stable curves.
If π : C → S is a stable curve of genus g over a scheme S, then C has a relative
dualizing sheaf ω_{C/S} with the following properties:
(1) The formation of ω_{C/S} commutes with base change.
(2) If S = Spec k, where k is an algebraically closed field and C̃ is the
normalization of C, then ω_{C/S} may be identified with the sheaf of meromorphic
differentials on C̃ that are allowed to have simple poles only at the inverse
images of the nodes, subject to the condition that if the points x and y lie over
the same node then the residues at these two points must sum to zero.
(3) In particular, if C is a stable curve over a field k, then
H^1(C, ω_{C/k}^{⊗n}) = 0 if n ≥ 2, and ω_{C/k}^{⊗n} is very ample for n ≥ 3.
When n = 3 we obtain a tri-canonical embedding of stable curves into P^{5g−6}
with Hilbert polynomial P(m) = (6m − 1)(g − 1).
To see the third property, observe that every irreducible component E of a stable
curve C either has arithmetic genus 2 or more, or has arithmetic genus one but
meets the other components in at least one point, or has arithmetic genus 0 and
meets the other components in at least three points. The restriction
ω_{C/k} ⊗ O_E is isomorphic to ω_{E/k}(∑_i Q_i), where the Q_i are the points
where E meets the rest of the curve. Since this sheaf has positive degree on each
component E of C, it is ample. Consequently ω_{C/k}^{⊗(1−n)} ⊗ O_E has no
sections for any n ≥ 2. By Serre duality, it follows that
H^1(C, ω_{C/k}^{⊗n}) = 0. To show that ω_{C/k}^{⊗n} is very ample when n ≥ 3, it
suffices to check that it separates points and tangents.
Exercise 3.1. Check that when n ≥ 3, ω_{C/k}^{⊗n} separates points and tangents.
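The stated Hilbert polynomial follows from Riemann–Roch on a genus g stable curve: χ(ω^{⊗3m}) = deg ω^{⊗3m} − g + 1 = 3m(2g − 2) − g + 1; a symbolic check, including the dimension 5g − 6 of the ambient projective space:

```python
import sympy as sp

m, g = sp.symbols('m g')

# chi(omega^{3m}) = 3m(2g - 2) - g + 1 by Riemann-Roch (H^1 vanishes for m >= 1)
chi = 3*m*(2*g - 2) - g + 1
assert sp.expand(chi - (6*m - 1)*(g - 1)) == 0

# dimension of the ambient projective space: h^0(omega^3) - 1 = 5g - 6
assert sp.expand(chi.subs(m, 1) - (5*g - 5)) == 0
print("P(m) = (6m - 1)(g - 1) confirmed")
```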
4. Stable reduction
Stable reduction was originally proved by Deligne and Mumford using the existence
of stable reduction for abelian varieties [DM]. [HM] Chapter 3 Section C
contains a beautiful account which we will summarize below.
The main theorem is the following:
Theorem 4.1 (Stable reduction). Let B be the spectrum of a DVR with function
field K. Let X → B be a family of curves with n sections σ_1, ..., σ_n such that
the restriction X_K → Spec K is an n-pointed stable curve. Then there exists a
finite field extension L/K and a unique stable family X̃ → B ×_K L with sections
σ̃_1, ..., σ̃_n such that the restriction to Spec L is isomorphic to X_K ×_K L.
One can algorithmically carry out stable reduction (at least in characteristic
zero). Since stable reduction is an essential tool in algebraic geometry we begin by
giving some examples. We will then sketch the proof.
Example 4.2. Fix a smooth curve C of genus g ≥ 2. Let p ∈ C be a fixed point
and let q be a varying point. More precisely, we have the family C × C → C with
two sections σ_p : C → C × C mapping a point q to (q, p) and σ_q : C → C × C
mapping q to (q, q). All the fibers are stable except when p = q. To obtain a
stable family, we blow up C × C at (p, p). The resulting picture looks as follows
(see Figure 2).
There is an algorithm that produces the stable reduction in characteristic zero.
This algorithm is worth knowing because the explicit calculation of the stable limit
often has applications to geometric problems.
Step 1. Resolve the singularities of the total space of the family. The result of
this step is a smooth surface X mapping to our initial surface. Moreover, we can
assume that the support of the central fiber is a normal-crossings divisor.
Step 2. After Step 1, at every point of the central fiber the pull-back of the
uniformizer may be expressed as x^a for some a > 0 at a smooth point, or as
x^a y^b for a pair a, b > 0 at a node. Make a base change of order p for some
prime p dividing the multiplicity of a multiple component of the fiber.

Figure 2. Stable reduction when two marked points collide.
Step 3. Normalize the resulting surface.
Suppose the central fiber was of the form ∑_i n_iC_i. The effect of doing steps 2
and 3 is to take a branched cover of the surface X branched along the reduction
modulo p of the divisor forming the central fiber. Repeat steps 2 and 3 until all
the components occurring in the central fiber appear with multiplicity 1.
Step 4. Contract the rational components of the central fiber that are not stable.
Sketch of proof of Theorem 4.1. We will assume that n = 0 and then make some
remarks about how to modify the statements here to obtain the general case. Let
R be a DVR with uniformizer z. Let η ∈ B = Spec R be the generic point. We are
assuming that our family X_η is a stable curve of genus g.
Consider regular, proper B-schemes that extend X_η. By results of Abhyankar [Ab]
about resolutions of surface singularities there exists a unique relatively
minimal model of X_η. Consider the completion of the local ring at a node of the
special fiber. This ring is isomorphic to R[[x, y]]/(xy − z^n) for some integer
n ≥ 1. This ring is not regular for n > 1. We can desingularize it in a sequence
of ⌊n/2⌋ blow-ups. Over the node we get a chain of (−2)-curves.
Let X be a proper, flat, regular surface extending X_η. Let C_i, i = 1, ..., n,
be the components of the special fiber. Suppose they occur with multiplicity r_i.
Recall the following basic facts about the components of the special fiber:
(1) The special fiber C is connected and the multiplicities r_i > 0 for all i.
(2) C_i · C_j ≥ 0 for all i ≠ j and C_i · C = 0 for all i.
(3) If K is the canonical class, then the arithmetic genus of C_i is given by the
genus formula as
1 + (C_i^2 + C_i · K)/2.
(4) The intersection matrix (C_i · C_j) is a negative semi-definite symmetric
matrix. The only linear combinations Z = ∑_i a_iC_i with the property that
Z^2 = 0 are rational multiples of C.
One can divide the components C_i of the special fiber into the following
categories
Example 4.3. Suppose we have a general pencil of smooth curves of genus g in
P^2 specializing to a curve with an ordinary m-fold point. We may write down the
equation of such a family as F + tG, where G is the equation defining a general
curve of genus g and F locally has the form
∏_{i=1}^{m} (y − a_ix) + h.o.t.
with distinct a_i. To perform stable reduction we blow up the m-fold point. In
the resulting surface the proper transform C of the central fiber is smooth of
genus g − m(m − 1)/2, but the exceptional divisor is a P^1 that meets C in m
points and occurs with multiplicity m. We make a base change of order m. We get
an m-fold cover of this P^1 totally ramified at the m points of intersection with
C. By the Riemann–Hurwitz formula this cover is a curve of genus m(m − 3)/2 + 1.
The stable limit then is as shown in the figure.
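The genus of the m-fold cover can be rechecked with Riemann–Hurwitz: 2g' − 2 = −2m + m(m − 1), since the cover of P^1 is totally ramified at m points; a quick check of the arithmetic:

```python
# Riemann-Hurwitz for a degree-m cover of P^1 totally ramified over m points:
# 2g' - 2 = m * (2*0 - 2) + m * (m - 1)  =>  g' = m(m-3)/2 + 1
def cover_genus(m):
    return (m * (-2) + m * (m - 1) + 2) // 2

assert all(cover_genus(m) == m * (m - 3) // 2 + 1 for m in range(2, 20))
print(cover_genus(3))  # 1  (the cover for an ordinary triple point)
```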
Exercise 4.4. Suppose Ct is a general pencil of smooth genus g plane curves
ac | https://ocw.mit.edu/courses/18-727-topics-in-algebraic-geometry-intersection-theory-on-moduli-spaces-spring-2006/06c4e979c3dadcba7d0f3dd69efdb315_const.pdf |