Since r = t, we can rewrite the solution as being parametrized by time t and the
marker s of the initial value of x:
x (t; s) = (1 + cf (s)) t + s,
u (t; s) = f (s) .
We have written u (t; s) = f (s) to make clear that u depends only on the parameter s. In
other words, u is constant along characteristics!
To solve for the density u at a fixed time t = t0, we (1) choose values for s, (2)
compute x (t0; s) and u (t0; s) at these s values and (3) plot u (t0; s) vs. x (t0; s). Since f (s)
is piecewise linear in s (i.e. composed of lines), x is therefore piecewise linear in s,
and hence at any given time, u = f (s) is piecewise linear in x. Thus, to find the
solution, we just need to follow the positions of the intersections of the lines in f (s)
(labeled by s = −1, 0, 1) in time. We then plot the positions of these intersections
along with their corresponding u value in the u vs. x plane and connect the dots to
obtain a plot of u (x, t). Note that for c = 1, the s = −1, 0, 1 characteristics are given
by
s = −1 : x = (1 + cf (−1)) t − 1 = 2t − 1
s = 0 : x = (1 + cf (0)) t + 0 = 3t
s = 1 : x = (1 + cf (1)) t + 1 = 2t + 1
These are plotted in Figure 2. The following tables are useful as a plotting aid:
t = 1/2:
  s:  −1    0    1
  u:   1    2    1
  x:   0   3/2   2

t = ts = 1:
  s:  −1    0    1
  u:   1    2    1
  x:   1    3    3
A plot of u (x, 1/2) is made by plotting the three points (x, u) from the table for
t = 1/2 and connecting the dots (see middle plot in Figure 3). Similarly, u (x, ts) =
u (x, 1) is plotted in the last plot of Figure 3.
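As a sanity check on the tables, the characteristic positions can be computed directly. This is a small sketch; the hat-shaped profile f, with f(±1) = 1 and f(0) = 2, is inferred from the characteristic speeds quoted above rather than stated explicitly in this excerpt:

```python
def f(s):
    """Hat-shaped initial profile implied by the characteristic speeds above:
    f(-1) = f(1) = 1, f(0) = 2, linear in between, constant outside [-1, 1]."""
    if abs(s) >= 1.0:
        return 1.0
    return 2.0 - abs(s)

def x_char(t, s, c=1.0):
    """Position of the characteristic labeled s at time t: x = (1 + c f(s)) t + s."""
    return (1.0 + c * f(s)) * t + s

for t in (0.5, 1.0):
    print(t, [x_char(t, s) for s in (-1, 0, 1)])
```

For c = 1 this reproduces the table values (x = 0, 3/2, 2 at t = 1/2 and x = 1, 3, 3 at t = 1), and with c = −1 it reproduces the vertical and tilted characteristics x = −1, −t, 1 listed below.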
Repeating the above steps for c = −1, the s = −1, 0, 1 characteristics are given
by
s = −1 : x = (1 − f (−1)) t − 1 = −1
s = 0 : x = (1 − f (0)) t + 0 = −t
s = 1 : x = (1 − f (1)) t + 1 = 1
These are plotted in Figure 4. We then construct the tables:
t = 1/2:
  s:  −1     0    1
  u:   1     2    1
  x:  −1   −1/2   1

t = ts = 1:
  s:  −1    0    1
  u:   1    2    1
  x:  −1   −1    1
As before, plots of u (x, 1/2) and u (x, 1) are made by plotting the three points (x, u)
from the tables and connecting the dots. See middle and bottom plots in Figure
5. Note that for c = 1 the wave front steepened, while for c = −1 the wave tail
steepened. This is easy to understand by noting how the speed changes relative to
the height u of the wave. When c = 1, the local wave speed 1 + u is larger for higher
parts of the wave. Hence the crest catches up with the trough ahead of it, and the
shock forms on the front of the wave. When c = −1, the local wave speed 1 − u is
larger for higher parts of the wave; hence the tail catches up with the crest, and the
shock forms on the back of the wave.
4 Solution to traffic flow problem
[Oct 31, 2005]
The traffic flow PDE (5) is
ut + (1 − 2u) ux = 0.    (11)
Figure 2: Plot of characteristics for c = 1.
Figure 3: Plot of u(x, t0) with c = 1 for t0 = 0, 0.5 and 1.
Figure 4: Plot of characteristics with c = −1.
Figure 5: Plot of u(x, t0) with c = −1 for t0 = 0, 0.5 and 1.
and has form (6) with (A, B, C) = (1 − 2u, 1, 0). The characteristic curves satisfy (9)
and (10)
∂x/∂r = 1 − 2u,   x (0) = s,
∂t/∂r = 1,        t (0) = 0,
∂u/∂r = 0,        u (0) = f (s) .
Integrating gives the parametric equations
t = r + c1,
u = c2,
x = (1 − 2u) r + c3 = (1 − 2c2) r + c3
Imposing the ICs gives c1 = 0, c2 = f (s), c3 = s, so that
t = r,
u = f (s) ,
x = (1 − 2f (s)) r + s = (1 − 2f (s)) t + s.    (12)
We can now write
x (t; s) = (1 − 2f (s)) t + s,
u (t; s) = f (s)
Again, the traffic density u is constant along characteristics. Note that this would
change if, for example, there were a source/sink term in the traffic flow equation (11),
i.e.

ut + (1 − 2u) ux = h (x, t, u)

where h (x, t, u) models the traffic loss/gain due to exits and on-ramps at various
positions.
4.1 Example: Light traffic heading into heavier traffic
Consider light traffic heading into heavy traffic, and model the initial density as

u (x, 0) = f (x) =  { α,                  x ≤ 0
                    { (3/4 − α) x + α,    0 ≤ x ≤ 1        (13)
                    { 3/4,                x ≥ 1
where 0 ≤ α ≤ 3/4. The lightness of traffic is parametrized by α. We consider the
case of light traffic α = 1/6 and moderate traffic α = 1/3.
From (12), the characteristics are

x =  { (1 − 2α) t + s,                   s ≤ 0
     { (1 − 2α − 2 (3/4 − α) s) t + s,   0 ≤ s ≤ 1
     { −t/2 + s,                         s ≥ 1
For α = 1/6, we have

x =  { (2/3) t + s,             s ≤ 0
     { (2/3 − (7/6) s) t + s,   0 ≤ s ≤ 1
     { −t/2 + s,                s ≥ 1

For α = 1/3, we have

x =  { (1/3) t + s,             s ≤ 0
     { (1/3 − (5/6) s) t + s,   0 ≤ s ≤ 1
     { −t/2 + s,                s ≥ 1
Again, for fixed times t = t0, plotting the solution amounts to choosing an appropriate
range of values for s, in this case −2 ≤ s ≤ 2 would suffice, and then plotting the
resulting points u (t0, s) versus x (t0, s) in the xu-plane.
The transformation (r, s) → (x, t) is non-invertible if the determinant of the
Jacobian matrix is zero,

∂ (x, t) / ∂ (r, s) = det [ xr  xs ; tr  ts ] = det [ 1 − 2f (s)   −2f ′ (s) r + 1 ; 1   0 ] = 2f ′ (s) r − 1 = 0.    (14)
Solving for r and noting that t = r gives the time when the determinant becomes
zero,

t = r = 1 / (2f ′ (s)) .    (15)
Since times in this problem are positive (t > 0), shocks occur if f ′ (s) > 0 for some
s. The first such time where a shock occurs is

tshock = 1 / (2 max {f ′ (s)}) .    (16)
In the example above, the time when a shock first occurs is given by substituting
(13) into (16),

tshock = 1 / (2 max {f ′ (s)}) = 1 / (2 (3/4 − α)) .
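A quick check of the shock-time formula with exact rational arithmetic; a small sketch using the fact that the steepest rising slope of the initial profile (13) is f′(s) = 3/4 − α on 0 ≤ s ≤ 1:

```python
from fractions import Fraction

def t_shock(alpha):
    """First shock time for the initial density (13):
    max f'(s) = 3/4 - alpha, so t_shock = 1 / (2 max f')."""
    max_slope = Fraction(3, 4) - alpha
    return 1 / (2 * max_slope)

print(t_shock(Fraction(1, 6)))  # light traffic
print(t_shock(Fraction(1, 3)))  # moderate traffic
```

For the two cases considered, light traffic (α = 1/6) gives tshock = 6/7, while moderate traffic (α = 1/3) gives tshock = 6/5, consistent with lighter traffic shocking sooner.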
Thus, lighter traffic (smaller α) leads to shocks sooner! The position of the shock at
tshock is given by
xshock = (1 − 2α) tshock = (1/2 − α) / (3/4 − α) .
6.034 Artificial Intelligence. Copyright © 2004 by Massachusetts Institute of Technology.
6.034 Notes: Section 2.4
Slide 2.4.1
So far, we have looked at three any-path algorithms, depth-first and breadth-first, which are
uninformed, and best-first, which is heuristically guided.
Slide 2.4.2
Now, we will look at the first algorithm that searches for optimal paths, as defined by a "path
length" measure. This uniform cost algorithm is uninformed about the goal, that is, it does not use
any heuristic guidance.
Slide 2.4.3
This is the simple algorithm we have been using to illustrate the various searches. As before, we will
see that the key issues are picking paths from Q and adding extended paths back in.
Slide 2.4.4
We will continue to use the algorithm but (as we will see) the use of the Visited list conflicts with
optimal searching, so we will leave it out for now and replace it with something else later.
Slide 2.4.5
Why can't we use a Visited list in connection with optimal searching? In the earlier searches, the use
of the Visited list guaranteed that we would not do extra work by re-visiting or re-expanding states.
It did not cause any failures then (except possibly of intuition).
Slide 2.4.6
But, using the Visited list can cause an optimal search to overlook the best path. A simple example
will illustrate this.
Slide 2.4.7
Clearly, the shortest path (as determined by sum of link costs) to G is (S A D G) and an optimal
search had better find it.
Slide 2.4.8
However, on expanding S, A and D are Visited, which means that the extension from A to D would
never be generated and we would miss the best path. So, we can't use a Visited list; nevertheless, we
still have the problem of multiple paths to a state leading to wasted work. We will deal with that
issue later, since it can get a bit complicated. So, first, we will focus on the basic operation of
optimal searches.
Slide 2.4.9
The first, and most basic, algorithm for optimal searching is called uniform-cost search. Uniform-
cost is almost identical in implementation to best-first search. That is, we always pick the best node
on Q to expand. The only, but crucial, difference is that instead of assigning the node value based on
the heuristic value of the node's state, we will assign the node value as the "path length" or "path
cost", a measure obtained by adding the "length" or "cost" of the links making up the path.
Slide 2.4.10
To reiterate, uniform-cost search uses the total length (or cost) of a path to decide which one to
expand. Since we generally want the least-cost path, we will pick the node with the smallest path
cost/length. By the way, we will often use the word "length" when talking about these types of
searches, which makes intuitive sense when we talk about the pictures of graphs. However, we mean
any cost measure (like length) that is positive and greater than 0 for the link between any two states.
Slide 2.4.11
The path length is the SUM of the lengths associated with the links in the path. For example, the path
from S to A to C has total length 4, since it includes two links, each of length 2.
Slide 2.4.12
The path from S to B to D to G has length 8 since it includes links of length 5 (S-B), 1 (B-D) and 2
(D-G).
Slide 2.4.13
Similarly for S-A-D-C.
Slide 2.4.14
Given this, let's simulate the behavior of uniform-cost search on this simple directed graph. As usual
we start with a single node containing just the start state S. This path has zero length. Of course, we
choose this path for expansion.
Slide 2.4.15
This generates two new entries on Q; the path to A has length 2 and the one to B has length 5. So,
we pick the path to A to expand.
Slide 2.4.16
This generates two new entries on the queue. The new path to C is the shortest path on Q, so we
pick it to expand.
Slide 2.4.17
Since C has no descendants, we add no new paths to Q and we pick the best of the remaining paths,
which is now the path to B.
Slide 2.4.18
The path to B is extended to D and G and the path to D from B is tied with the path to D from A.
We are using order in Q to settle ties and so we pick the path from B to expand. Note that at this
point G has been visited but not expanded.
Slide 2.4.19
Expanding D adds paths to C and G. Now the earlier path to D from A is the best pending path and
we choose it to expand.
Slide 2.4.20
This adds a new path to G and a new path to C. The new path to G is the best on the Q (at least tied
for best) so we pull it off Q.
Slide 2.4.21
And we have found our shortest path (S A D G) whose length is 8.
Slide 2.4.22
Note that once again we are not stopping on first visiting (placing on Q) the goal. We stop when the
goal gets expanded (pulled off Q).
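The simulation above can be sketched as a priority-queue search. The edge costs here are reconstructed from the path lengths quoted in the slides (S-A = 2, S-B = 5, A-C = 2, A-D = 4, B-D = 1, D-G = 2); the slides also have a D-to-C edge whose cost is never stated, so it is omitted:

```python
import heapq

# Directed graph with edge costs reconstructed from the quoted path lengths.
GRAPH = {
    'S': [('A', 2), ('B', 5)],
    'A': [('C', 2), ('D', 4)],
    'B': [('D', 1)],
    'C': [],
    'D': [('G', 2)],
    'G': [],
}

def uniform_cost(graph, start, goal):
    """Always expand the cheapest path on Q; stop only when the goal is
    *expanded* (pulled off Q), not when it is first visited."""
    q = [(0, [start])]  # entries are (path cost, path)
    while q:
        cost, path = heapq.heappop(q)
        state = path[-1]
        if state == goal:
            return cost, path
        for nxt, w in graph[state]:
            heapq.heappush(q, (cost + w, path + [nxt]))
    return None

print(uniform_cost(GRAPH, 'S', 'G'))
```

With these costs the search returns a shortest path of length 8; (S A D G) and (S B D G) are tied, and which one comes back depends on tie-breaking, just as in the slides.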
Slide 2.4.23
In uniform-cost search, it is imperative that we only stop when G is expanded and not just when it is
visited. Until a path is first expanded, we do not know for a fact that we have found the shortest path
to the state.
Slide 2.4.24
In the any-path searches we chose to do the same thing, but that choice was motivated at the time
simply by consistency with what we HAVE to do now. In the earlier searches, we could have
chosen to stop when visiting a goal state and everything would still work fine (actually better).
Slide 2.4.25
Note that the first path that visited G was not the eventually chosen optimal path to G. This explains
our unwillingness to stop on first visiting G in the example we just did.
Slide 2.4.26
It is very important to drive home the fact that what uniform-cost search is doing (if we focus on the
sequence of expanded paths) is enumerating the paths in the search tree in order of their path cost.
The green numbers next to the tree on the left are the total path cost of the path to that state. Since,
in a tree, there is a unique path from the root to any node, we can simply label each node by the
length of that path.
Slide 2.4.27
So, for example, the trivial path from S to S is the shortest path.
Slide 2.4.28
Then the path from S to A, with length 2, is the next shortest path.
Slide 2.4.29
Then the path from S to A to C, with length 4, is the next shortest path.
Slide 2.4.30
Then comes the path from S to B, with length 5.
Slide 2.4.31
Followed by the path from S to A to D, with length 6.
Slide 2.4.32
And the path from S to B to D, also with length 6.
Slide 2.4.33
And, finally the path from S to A to D to G with length 8. The other path (S B D G) also has length
8.
Slide 2.4.34
This gives us the path we found. Note that the sequence of expansion corresponds precisely to path-
length order, so it is not surprising we find the shortest path.
6.034 Artificial Intelligence. Copyright © | https://ocw.mit.edu/courses/6-034-artificial-intelligence-spring-2005/089f994f6a760c3c0b34e43dcd18ea60_ch2_search2.pdf |
sequence of expansion corresponds precisely to path-
length order, so it is not surprising we find the shortest path.
6.034 Artificial Intelligence. Copyright © 2004 by Massachusetts Institute of Technology.
6.034 Notes: Section 2.5
Slide 2.5.1
Now, we will turn our attention to what is probably the most popular search algorithm in AI, the
A* algorithm. A* is an informed, optimal search algorithm. We will spend quite a bit of time
going over A*; we will start by contrasting it with uniform-cost search.
Slide 2.5.2
Uniform-cost search as described so far is concerned only with expanding short paths; it pays no
particular attention to the goal (since it has no way of knowing where it is). UC is really an
algorithm for finding the shortest paths to all states in a graph rather than being focused in reaching
a particular goal.
Slide 2.5.3
We can bias UC to find the shortest path to the goal that we are interested in by using a heuristic
estimate of remaining distance to the goal. This, of course, cannot be the exact path distance (if we
knew that we would not need much of a search); instead, it is a stand-in for the actual distance that
can give us some guidance.
Slide 2.5.4
What we can do is to enumerate the paths by order of the SUM of the actual path length and the
estimate of the remaining distance. Think of this as our best estimate of the TOTAL distance to the
goal. This makes sense if we want to generate a path to the goal in preference to short paths
leading away from the goal.
Slide 2.5.5
We call an estimate that always underestimates the remaining distance from any node an
admissible (heuristic) estimate.
Slide 2.5.6
In order to preserve the guarantee that we will find the shortest path by expanding the partial paths
based on the estimated total path length to the goal (like in UC without an expanded list), we must
ensure that our heuristic estimate is admissible. Note that straight-line distance is always an
underestimate of path-length in Euclidean space. Of course, by our constraint on distances, the
constant function 0 is always admissible (but useless).
Slide 2.5.7
UC using an admissible heuristic is known as A* (A star). It is a very popular search method in AI.
Slide 2.5.8
Let's look at a quick example of the straight-line distance underestimate for path length in a graph.
Consider the following simple graph, which we are assuming is embedded in Euclidean space, that
is, think of the states as city locations and the length of the links are proportional to the driving
distance between the cities along the best roads.
Slide 2.5.9
Then, we can use the straight-line (airline) distances (shown in red) as an underestimate of the actual
driving distance between any city and the goal. The best possible driving distance between two
cities cannot be better than the straight-line distance. But, it can be much worse.
Slide 2.5.10
Here we see that the straight-line estimate between B and G is very bad. The actual driving distance
is much longer than the straight-line underestimate. Imagine that B and G are on different sides of
the Grand Canyon, for example.
Slide 2.5.11
It may help to understand why an underestimate of remaining distance may help reach the goal
faster to visualize the behavior of UC in a simple example.
Imagine that the states in a graph represent points in a plane and the connectivity is to nearest
neighbors. In this case, UC will expand nodes in order of distance from the start point. That is, as
time goes by, the expanded points will be located within expanding circular contours centered on the
start point. Note, however, that points heading away from the goal will be treated just the same as
points that are heading towards the goal.
Slide 2.5.12
If we add in an estimate of the straight-line distance to the goal, the points expanded will be
bounded contours that keep constant the sum of the distance from the start and the distance to the
goal, as suggested in the figure. What the underestimate has done is to "bias" the search towards the | https://ocw.mit.edu/courses/6-034-artificial-intelligence-spring-2005/089f994f6a760c3c0b34e43dcd18ea60_ch2_search2.pdf |
goal.
Slide 2.5.13
Let's walk through an example of A*, that is, uniform-cost search using a heuristic which is an
underestimate of remaining cost to the goal. In this example we are focusing on the use of the
underestimate. The heuristic we will be using is similar to the earlier one but slightly modified to be
admissible.
We start at S as usual.
Slide 2.5.14
And expand to A and B. Note that we are using the path length + underestimate and so the S-A path
has a value of 4 (length 2, estimate 2). The S-B path has a value of 8 (5 + 3). We pick the path to A.
Slide 2.5.15
Expand to C and D and pick the path with shorter estimate, to C.
Slide 2.5.16
C has no descendants, so we pick the shorter path (to D).
Slide 2.5.17
Then a path to the goal has the best value. However, there is another path that is tied, the S-B path.
It is possible that this path could be extended to the goal with a total length of 8 and we may prefer
that path (since it has fewer states). We have assumed here that we will ignore that possibility, in
some other circumstances that may not be appropriate.
Slide 2.5.18
So, we stop with a path to the goal of length 8.
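The A* walkthrough can be sketched in code. The edge costs, h(A) = 2, and h(B) = 3 follow the slides (path values 4 = 2 + 2 and 8 = 5 + 3); h(C) and h(D) are hypothetical admissible underestimates filled in here, since the slides' remaining estimates aren't reproduced in this excerpt:

```python
import heapq

GRAPH = {  # same reconstructed edge costs as in the uniform-cost sketch
    'S': [('A', 2), ('B', 5)],
    'A': [('C', 2), ('D', 4)],
    'B': [('D', 1)],
    'C': [],
    'D': [('G', 2)],
    'G': [],
}

# h(A) = 2 and h(B) = 3 can be read off the slides; h(C) = 0 and h(D) = 1
# are hypothetical admissible values (h(S) is irrelevant, as the slides note).
H = {'S': 0, 'A': 2, 'B': 3, 'C': 0, 'D': 1, 'G': 0}

def a_star(graph, h, start, goal):
    """Uniform-cost search ordered by g + h; stop when the goal is expanded."""
    q = [(h[start], 0, [start])]  # entries are (g + h, g, path)
    while q:
        _, g, path = heapq.heappop(q)
        state = path[-1]
        if state == goal:
            return g, path
        for nxt, w in graph[state]:
            heapq.heappush(q, (g + w + h[nxt], g + w, path + [nxt]))
    return None

print(a_star(GRAPH, H, 'S', 'G'))
```

Because the heuristic is admissible, the first goal path expanded is optimal: length 8, with (S A D G) and (S B D G) tied.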
Slide 2.5.19
It is important to realize that not all heuristics are admissible. In fact, the rather arbitrary heuristic
values we used in our best-first example are not admissible given the path lengths we later assigned.
In particular, the value for D is bigger than its distance to the goal and so this set of distances is not
everywhere an underestimate of distance to the goal from every node. Note that the (arbitrary) value
assigned for S is also an overestimate but this value would have no ill effect since at the time S is
expanded there are no alternatives.
Slide 2.5.20
Although it is easy and intuitive to illustrate the concept of a heuristic by using the notion of straight-
line distance to the goal in Euclidean space, it is important to remember that this is by no means the
only example.
Take solving the so-called 8-puzzle, in which the goal is to arrange the pieces as in the goal state on
the right. We can think of a move in this game as sliding the "empty" space to one of its nearest
vertical or horizontal neighbors. We can help steer a search to find a short sequence of moves by
using a heuristic estimate of the moves remaining to the goal.
One admissible estimate is simply the number of misplaced tiles. No move can get more than one
misplaced tile into place, so this measure is a guaranteed underestimate and hence admissible.
Slide 2.5.21
We can do better if we note that, in fact, each move can at best decrease by one the
"Manhattan" (aka Taxicab, aka rectilinear) distance of a tile from its goal.
So, the sum of these distances for each misplaced tile is also an underestimate. Note that it is always
a better (larger) underestimate than the number of misplaced tiles. In this example, there are 7
misplaced tiles (all except tile 2), but the Manhattan distance estimate is 17 (4 for tile 1, 0 for tile 2,
2 for tile 3, 3 for tile 4, 1 for tile 5, 3 for tile 6, 1 for tile 7 and 3 for tile 8). | https://ocw.mit.edu/courses/6-034-artificial-intelligence-spring-2005/089f994f6a760c3c0b34e43dcd18ea60_ch2_search2.pdf |
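Both heuristics are simple to compute. The sketch below uses a made-up board, not the configuration shown on the slide, to illustrate that the Manhattan-distance estimate is always at least as large as the misplaced-tile count:

```python
# Two admissible 8-puzzle heuristics from the notes: misplaced tiles and
# the sum of Manhattan distances.  0 marks the empty space; the start
# board is an illustrative example, not the slide's configuration.
GOAL = ((1, 2, 3), (4, 5, 6), (7, 8, 0))
START = ((2, 8, 3), (1, 6, 4), (7, 0, 5))

def positions(board):
    """Map each tile to its (row, col) position."""
    return {tile: (r, c) for r, row in enumerate(board) for c, tile in enumerate(row)}

def misplaced(board, goal=GOAL):
    """Number of non-empty tiles out of place (each move fixes at most one)."""
    gp = positions(goal)
    return sum(1 for tile, pos in positions(board).items()
               if tile != 0 and pos != gp[tile])

def manhattan(board, goal=GOAL):
    """Sum of rectilinear tile distances (each move reduces the sum by at most 1)."""
    gp = positions(goal)
    return sum(abs(r - gp[tile][0]) + abs(c - gp[tile][1])
               for tile, (r, c) in positions(board).items() if tile != 0)

print(misplaced(START), manhattan(START))  # 6 misplaced tiles, Manhattan sum 9
```

For this board, 6 tiles are misplaced while the Manhattan estimate is 9, so the Manhattan heuristic dominates, as the notes point out.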
18.417 Introduction to Computational Molecular Biology
Lecture 13: October 21, 2004
Lecturer: Ross Lippert
Scribe: Eitan Reich
Editor: Peter Lee
13.1 Introduction
We have been looking at algorithms to find optimal scoring alignments of a query text
Q with a database T . While these algorithms have reasonable asymptotic runtimes,
namely O(|Q||T |), they are impractical for larger databases such as genomes. To
improve the runtime of our algorithms, we will sacrifice the optimality requirement
and instead use smart heuristics and statistical significance data to find “close to
optimal” alignments and quantify how significant such alignments are.
13.2 Inexact Matching
The inexact matching problem is to find good alignments while allowing for limited
discrepancies in the form of substitutions, insertions and deletions. The problem can
be stated in the following two ways, which are slight variations of each other:
Definition 1 (k-mismatch l-word problem) Given distance limit k, word length
l, query string q, and text string t, return all pairs of integers (i, j) such that
d(qi · · · qi+l, tj · · · tj+l) ≤ k, where d is the Hamming distance function.
Definition 2 (S-scoring l-word problem) Given score requirement S, scoring
function σ, word length l, query string q, and text string t, return all pairs of integers
(i, j) such that σ(qi · · · qi+l, tj · · · tj+l) ≥ S, where σ(x, y) := Σi σ(xi, yi).

13.3 Pigeonhole Principle
A key insight in the inexact matching problem is that wherever there is a good
approximate alignment, there will be smaller exact alignments. For example, if there
is an alignment of 9 bases with at most 1 mismatch, there must be an exact alignment
of at least 4 bases. The pigeonhole principle is used to generalize this idea and quantify
exactly what type of exact alignments we can be guaranteed to find, given the type
of inexact alignment. To locate good approximate matches, we instead look for the
exact matches that we would expect to find in longer inexact alignments.

Figure 13.1: Good matches must contain exact matches.

These exact matches can then be extended to produce longer matches that may no longer be
exact, but may still be within the given discrepancy threshold. More specifically, we
can locate exact matches and extend them, using the k-mismatch algorithm:
1. We know that wherever there is a k-mismatch l-word, there is at least an exact
match of length s, where s = ⌊l/(k + 1)⌋.
2. We can look for potential alignment locations by finding all s-words that match
exactly between the query string and the text.
3. These s-words can then be extended to the left and the right to find an l-word
within k mismatches. To do this extension correctly takes O(l^2) time, although
many methods don't actually achieve this run-time.
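The seed-and-extend idea in steps 1-3 can be sketched and checked against brute force. This is a toy sketch with made-up strings; a real tool would index the text far more efficiently and use a smarter extension step:

```python
def hamming(a, b):
    """Number of mismatching positions between equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def k_mismatch_naive(q, t, l, k):
    """Definition 1 by brute force: all (i, j) with d(q[i:i+l], t[j:j+l]) <= k."""
    return {(i, j)
            for i in range(len(q) - l + 1)
            for j in range(len(t) - l + 1)
            if hamming(q[i:i+l], t[j:j+l]) <= k}

def k_mismatch_seeded(q, t, l, k):
    """Steps 1-3 above: every k-mismatch l-word contains an exact match of
    length s = floor(l / (k + 1)); find those seeds, then verify the l-word."""
    s = l // (k + 1)
    index = {}  # exact s-word -> positions in the text
    for j in range(len(t) - s + 1):
        index.setdefault(t[j:j+s], []).append(j)
    hits = set()
    for i in range(len(q) - s + 1):
        for j in index.get(q[i:i+s], []):
            # the exact seed may sit at any offset inside the l-word
            for off in range(l - s + 1):
                i0, j0 = i - off, j - off
                if 0 <= i0 <= len(q) - l and 0 <= j0 <= len(t) - l:
                    if hamming(q[i0:i0+l], t[j0:j0+l]) <= k:
                        hits.add((i0, j0))
    return hits

q, t = "acgtacgttacg", "tacgaacgtacgtt"  # made-up example strings
print(sorted(k_mismatch_seeded(q, t, 8, 1)))
```

By the pigeonhole guarantee, the seeded search finds exactly the same hit set as the brute-force scan, while only examining positions that share an exact s-word.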
or, for the S-scoring problem, using the S-scoring algorithm as follows:
1. We know that wherever there is an S-scoring l-word match, there must be some
s-word match with score threshold T (from exact matching)
2. To locate potential high scoring locations, we form T -scoring neighborhoods of
the s-words in the query text.
3. All neighborhood words in T are then found by exact matching.
4. Each of these s-words can then be extended to the left and right to find an
l-word with score at least S.
To extend exact matching s-words to the left and right to produce approximately
matching l-words, we can use either ungapped extension, or gapped extension.
Using ungapped extension, the exact matches are extended to the left and right
without allowing any insertions or deletions. If two bases do not match, there is
simply a mismatch that counts towards the k-mismatch limit or will penalize the score
in the S-scoring problem.

Figure 13.2: Ungapped extension.

Using gapped extension, the exact matches are extended
to the left and right, allowing mismatches, insertions and deletions.
matching s-word that we begin with has no insertions or deletions, the approximate
match that is produced by extension allows for such imperfections which will count
toward the mismatch limit or penalize the score in the S-scoring problem:

Figure 13.3: Gapped extension.

13.4 BLAST

BLAST, or Basic Local Alignment Search Tool, is the successor to two simpler tools:
FASTA, a nucleotide alignment tool, and FASTP, a protein sequence alignment tool.
Like its predecessors, BLAST works by starting with a seed and then, using an
extension heuristic, finds approximate matches from shorter exact matches. What is so
innovative about BLAST, however, is its incorporation of statistical measures along
with alignment results to tell how statistically significant an alignment is. In addition,
given a query and text string of any length, BLAST can find maximal scoring
pairs (MSPs) very efficiently. While BLAST originally just returned MSPs, it now
also returns alignments after extension.
Fact 1 (Altschul-Karlin statistical result) If our query string has length n and
our text has length m, then the expected number of MSPs with score greater than or
equal to S is:

E(S) = K m n e^(−λS)

where K is a constant and λ is a normalizing factor that is the positive root of the
following equation:

Σ_{x,y∈Σ} px py e^(λσ(x,y)) = 1

where px is the frequency of character x from our alphabet Σ and σ is our scoring
function.
Fact 2 (Chen-Stein statistical result) The number of MSPs forms a Poisson
distribution with average value E(S).
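To make Fact 1 concrete, here is a small sketch computing the normalizing factor λ for a hypothetical scoring scheme: uniform base frequencies px = 1/4, score +1 for a match and −1 for a mismatch. These numbers are illustrative assumptions, not values from the lecture:

```python
import math

# With p_x = 1/4 and sigma = +1 (match) / -1 (mismatch), the condition
#   sum_{x,y} p_x p_y exp(lambda * sigma(x, y)) = 1
# becomes (1/4) e^lambda + (3/4) e^(-lambda) = 1, whose positive root
# is lambda = ln 3 (lambda = 0 is the trivial root).
def expected_score_factor(lam):
    return 0.25 * math.exp(lam) + 0.75 * math.exp(-lam)

def solve_lambda(lo=1e-9, hi=10.0, tol=1e-12):
    """Bisection for the positive root of expected_score_factor(lam) = 1."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expected_score_factor(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(solve_lambda(), math.log(3))
```

The bisection recovers λ = ln 3 ≈ 1.0986; for realistic substitution matrices no closed form exists and λ is found numerically just like this.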
13.4.1 Variations
• BLASTn: nucleotide to nucleotide database alignment
• BLASTp: protein to protein database alignment
• BLASTx: translated nucleotide to protein database alignment
• tBLASTn: protein to translated nucleotide database alignment
• tBLASTx: translated nucleotide to translated nucleotide database alignment
13.5 Filtration
The idea behind filtration is to find seeds that will locate every MSP.
Given a proposed alignment, we can produce a binary string where there is a 1
whenever the two strings match and a 0 if they do not. This binary string gives us
the matching rate of the alignment. An MSP is an alignment whose associated binary
string has a high proportion of 1s. For example, here is an MSP of length 20 where
more than 70% of the bases are aligned as matches:
Q a a t c t t g c g a g a c c a a t g g c a c t t
T c t t c c t g c g g g a c c t a t c c c a c a a
= 0 0 1 1 0 1 1 1 1 0 1 1 1 1 0 1 1 0 0 1 1 1 0 0
If, to locate MSPs, we look for an alignment that would produce an exact match of 4
bases (as we might have derived using the pigeonhole principle), we would be using
the seed 1111, which corresponds to finding a location in the binary string where
there are 4 consecutive 1s. We can see that if we choose our seed to be too small (for
example choosing 11 as our seed), we would hit too many locations in the string to
be a statistically significant alignment, since many short exact matches can occur by
chance and have nothing to do with the existence of an MSP. On the other hand, if
we choose our seed to be too long, we will miss many MSPs because there may still
be slight mismatches in an MSP that would cause the seed to reject that location.
A creative idea that allows the seed to account for slight mismatches, but at the same
time does not pick up too many alignments that occur merely due to chance, is to
use gapped seeds.
Instead of looking for 1111, we can look for 11011. We can see the effectiveness of
the gapped seed as opposed to the consecutively spaced seed in the example because
the consecutively spaced seed misses a lot of locations in the MSP and almost fails to
hit it altogether. With the gapped seed, however, a small amount of error is allowed
by introducing a don't-care bit, and there are now many more hits on the MSP
to ensure that it isn’t missed. The idea of using gapped seeds rather than simple
consecutively spaced seeds increases the effectiveness of MSP search methods by not
allowing random noise to produce too many hits while at the same time ensuring that
MSPs are hit. | https://ocw.mit.edu/courses/18-417-introduction-to-computational-molecular-biology-fall-2004/08cd50dd8f78529b51b90101f2617f36_lecture_13.pdf |
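The tradeoff can be made concrete with a short Python sketch (an illustration, not BLAST's actual implementation): scan the binary match string from the example above with the contiguous seed 1111 and with the gapped seed 11011, where a 0 in the seed is a don't-care position.

```python
def seed_hits(match_string, seed):
    """Return positions where the seed hits: every '1' in the seed must line
    up with a '1' (a match) in the binary string; '0' is a don't-care."""
    hits = []
    for i in range(len(match_string) - len(seed) + 1):
        if all(match_string[i + k] == "1"
               for k, c in enumerate(seed) if c == "1"):
            hits.append(i)
    return hits

s = "001101111011110110011100"   # binary match string of the MSP example
print(seed_hits(s, "1111"))      # contiguous seed → [5, 10]
print(seed_hits(s, "11011"))     # gapped seed → [2, 7, 12]
```

On this string the gapped seed hits in more places than the contiguous seed, which is the extra robustness described above.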
6.895 Theory of Parallel Systems
Lecture 2
Cilk, Matrix Multiplication, and Sorting
Lecturer: Charles Leiserson
Lecture Summary
1. Parallel Processing With Cilk
This section provides a brief introduction to the Cilk language and how Cilk schedules and executes
parallel processes.
2. Parallel Matrix Multiplication
This section shows how to multiply matrices efficiently in parallel.
3. Parallel Sorting
This section describes a parallel implementation of the merge sort algorithm.
1 Parallel Processing With Cilk
We need a systems background to implement and test our parallel systems theories. This section gives
an introduction to the Cilk parallel-programming language. It then gives some background on how Cilk
schedules parallel processes and examines the role of race conditions in Cilk.
1.1 Cilk
This section introduces Cilk. Cilk is a version of C that runs in a parallel-processing environment. It uses
the same syntax as C with the addition of the keywords spawn, sync, and cilk. For example, the Fibonacci
function written in Cilk looks like this:
cilk int fib(int n)
{
if(n < 2) return n;
else
{
int x, y;
x = spawn fib(n-1);
y = spawn fib(n-2);
sync;
return(x + y);
}
}
Cilk is a faithful extension of C, in that if the Cilk keywords are elided from a Cilk program, the result is
a C program which implements the Cilk semantics.
A function preceded by cilk is defined as a Cilk function. For example,
cilk int fib(int n);
defines a Cilk function called fib. Functions defined without the cilk keyword are typical C functions.
A function call preceded by the spawn keyword tells the Cilk compiler that the function call can be made
Figure 1: Call stack of an executing process. Boxes represent stack frames.
asynchronously in a concurrent thread. The sync keyword forces the current thread to wait for asynchronous
function calls made from the current context to complete.
Cilk keywords introduce several idiosyncracies into the C syntax. A Cilk function cannot be called with
normal C calling conventions – it must be called with spawn and waited for with sync. The spawn keyword
can only be applied to a Cilk function. The spawn keyword cannot occur within the context of a C function.
Refer to the Cilk manual for more details.
1.2 Parallel-Execution Model
This section examines how Cilk runs processes in parallel. It introduces the concepts of work-sharing and
work-stealing and then outlines Cilk’s implementation of the work-stealing algorithm.
Cilk processes are scheduled using an online greedy scheduler. The performance bounds of the online
scheduler are close to the optimal offline scheduler. We will look at provable bounds on the performance of
the online scheduler later in the term.
Cilk schedules processes using the principle of work-stealing rather than work-sharing. Work-sharing is
where a thread is scheduled to run in parallel whenever the runtime makes an asynchronous function call.
Work-stealing, in contrast, is where a processor looks around for work whenever it becomes idle.
To better explain how Cilk implements work-stealing, let us first examine the call stack of a vanilla C
program running | https://ocw.mit.edu/courses/6-895-theory-of-parallel-systems-sma-5509-fall-2003/08fd01c65bec5dd4805a5088973f6e20_lecture2.pdf |
work-stealing, let us first examine the call stack of a vanilla C
program running on a single processor. In figure 1, the stack grows downward. Each stack frame contains
local variables for a function. When a function makes a function call, a new stack frame is pushed onto the
stack (added to the bottom) and when a function returns, its stack frame is popped from the stack. The
call stack maintains synchronization between procedures and functions that are called.
In the work-sharing scheme, when a function is spawned, the scheduler runs the spawned thread in
parallel with the current thread. This has the benefit of maximizing parallelism. Unfortunately, the cost of
setting up new threads is high and should be avoided.
Work-stealing, on the other hand, only branches execution into parallel threads when a processor is
idle. This has the benefit of executing with precisely the amount of parallelism that the hardware can take
advantage of. It minimizes the number of new threads that must be setup. Work-stealing is the lazy way to
put off work for parallel execution until parallelism actually occurs. It has the benefit of running with the
same efficiency as a serial program in a uniprocessor environment.
Another way to view the distinction between work-stealing and work-sharing is in terms of how the
scheduler walks the computation graph. Work-sharing branches as soon and as often as possible, walking
the computation graph with a breadth-first search. Work-stealing only branches when necessary, walking
the graph with a depth-first search.
Cilk’s implementation of work-stealing avoids running threads that are likely to share variables by schedul-
ing threads to run from the other end of the call stack. When a processor is idle, it chooses a random processor
2-2
Figure 2: Example of a call stack shown as a cactus stack, and the views of the stack as seen by each procedure.
Boxes represent stack frames.
and finds the sleeping stack frame that is closest to the base of that processor’s stack and executes it. This
way, Cilk always parallelizes code execution at the oldest possible code branch.
1.3 Cactus Stack
Cilk uses a cactus stack to implement C’s rule for sharing of function-local variables. A cactus stack is
a parallel stack implemented as a tree. A push maps to following a branch to a child and a pop maps to
returning to the parent in a tree. For example, the cactus tree in figure 2 represents the call stack constructed
by a call to A in the following code:
void A(void)
{
B();
C();
}
void B(void)
{
D();
E();
}
void C(void)
{
F();
}
void D(void) {}
void E(void) {}
void F(void) {}
Cilk has the same rules for pointers as C. Pointers to local variables can be passed downwards in the
call stack. Pointers can be passed upward only if they reference data stored on the heap (allocated with
malloc). In other words, a stack frame can only see data stored | https://ocw.mit.edu/courses/6-895-theory-of-parallel-systems-sma-5509-fall-2003/08fd01c65bec5dd4805a5088973f6e20_lecture2.pdf |
reference data stored on the heap (allocated with
malloc). In other words, a stack frame can only see data stored in the current and in previous stack frames.
Functions cannot return references to local variables.
The complete call tree is shown in figure 2. Each procedure sees a different view of the call stack based
on how it is called. For example, B sees a call stack of A followed by B, D sees a call stack of A followed by
B followed by D and so on. When procedures are run in parallel by Cilk, the running threads operate on
their view of the call stack. The stack maintained by each process is a reference to the actual call stack, not
a copy of it. Cilk maintains coherence among call stacks that contain the same frames using methods that
we will discuss later.
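The parent-pointer structure can be sketched in a few lines of Python (an illustration of the views in figure 2, not of Cilk's actual implementation):

```python
class Frame:
    """A cactus-stack frame: a push creates a child, and a frame's view of
    the stack is the path from the root down to itself."""
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

    def view(self):
        f, path = self, []
        while f is not None:
            path.append(f.name)
            f = f.parent
        return path[::-1]

# The call tree from figure 2: A calls B and C; B calls D and E; C calls F.
A = Frame("A")
B, C = Frame("B", A), Frame("C", A)
D, E, F = Frame("D", B), Frame("E", B), Frame("F", C)
print(D.view())  # → ['A', 'B', 'D']
print(F.view())  # → ['A', 'C', 'F']
```

Each frame shares the path to the root with its siblings rather than copying it, mirroring how the running threads operate on views of one shared call stack.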
1.4 Race Conditions
The single most prominent reason that parallel computing is not widely deployed today is because of race
conditions. Identifying and debugging race conditions in parallel code is hard. Once a race condition has been
found, no methodology currently exists to write a regression test to ensure that the bug is not reintroduced
during future development. For these reasons, people do not write and deploy parallel code unless they
absolutely must. This section examines an example race condition in Cilk.
Consider the following code:
cilk int foo(void)
{
int x = 0;
spawn bar(&x);
spawn bar(&x);
sync;
return x;
}
cilk void bar(int *p)
{
*p += 1;
}
If this were a serial code, we would expect that foo returns 2. What value is returned by foo in the parallel
case? Assume the increment performed by bar is implemented with assembly that looks like this:
read x
add
write x
Then, the parallel execution looks like the following:
bar 1:
read x (1)
add
write x (2)
bar 2:
read x (3)
add
write x (4)
where bar 1 and bar 2 run concurrently. On a single processor, the steps are executed (1) (2) (3) (4) and
foo returns 2 as expected. In the parallel case, however, the execution could occur in the order (1) (3) (2)
(4), in which case foo would return 1. The simple code exhibits a race condition. Cilk has a tool called the
Nondeterminator which can be used to help check for race conditions.
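The racy interleaving can also be replayed deterministically; this Python sketch mimics the four assembly steps in the order (1) (3) (2) (4) to show the lost update:

```python
# Deterministic replay of the interleaving (1) (3) (2) (4): both calls to bar
# read x before either writes it back, so one increment is lost.
x = 0
r1 = x        # (1) bar 1: read x
r2 = x        # (3) bar 2: read x
x = r1 + 1    # (2) bar 1: write x
x = r2 + 1    # (4) bar 2: write x
print(x)      # → 1, not the expected 2
```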
2 Matrix Multiplication and Merge Sort
In this section we explore multithreaded algorithms for matrix multiplication and array sorting. We also
analyze the work and critical path-lengths. From these measures, we can compute the parallelism of the
algorithms.
2.1 Matrix Multiplication
To multiply two n × n matrices in parallel, we use a recursive algorithm. This algorithm uses the following
formulation, where matrix A multiplies matrix B to produce a matrix C:
( C11 C12 )   ( A11 A12 )   ( B11 B12 )
( C21 C22 ) = ( A21 A22 ) · ( B21 B22 )

              ( A11B11 + A12B21   A11B12 + A12B22 )
            = ( A21B11 + A22B21   A21B12 + A22B22 ).
This formulation expresses an n × n matrix multiplication as 8 multiplications and 4 additions of (n/2) ×
(n/2) submatrices. The multithreaded algorithm Mult performs the above computation when n is a power
of 2. Mult uses the subroutine Add to add two n × n matrices.
Mult(C, A, B, n)
if n = 1
then C[1, 1] ← A[1, 1] · B[1, 1]
else
allocate a temporary matrix T[1..n, 1..n]
partition A, B, C and T into (n/2) × (n/2) submatrices
spawn Mult(C11, A11, B11, n/2)
spawn Mult(C12, A11, B12, n/2)
spawn Mult(C21, A21, B11, n/2)
spawn Mult(C22, A21, B12, n/2)
spawn Mult(T11, A12, B21, n/2)
spawn Mult(T12, A12, B22, n/2)
spawn Mult(T21, A22, B21, n/2)
spawn Mult(T22, A22, B22, n/2)
sync
spawn Add(C, T, n)
sync
Add(C, T, n)
if n = 1
then C[1, 1] ← C[1, 1] + T [1, 1]
else partition C and T into (n/2) × (n/2) submatrices
spawn Add(C11, T11, n/2)
spawn Add(C12, T12, n/2)
spawn Add(C21, T21, n/2)
spawn Add(C22, T22, n/2)
sync
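A serial Python sketch of the Mult/Add recursion above (not Cilk code: each spawn becomes an ordinary recursive call, and the Add step becomes an elementwise addition of C and T):

```python
# Serial sketch of the divide-and-conquer Mult/Add recursion for n a power of 2.
def sub(M, r, c, h):            # h x h submatrix of M at block row r, block col c
    return [row[c*h:(c+1)*h] for row in M[r*h:(r+1)*h]]

def join(C11, C12, C21, C22):   # reassemble four blocks into one matrix
    top = [a + b for a, b in zip(C11, C12)]
    bot = [a + b for a, b in zip(C21, C22)]
    return top + bot

def mult(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    Ablk = [[sub(A, r, c, h) for c in (0, 1)] for r in (0, 1)]
    Bblk = [[sub(B, r, c, h) for c in (0, 1)] for r in (0, 1)]
    def block(r, c):            # C_rc = A_r1*B_1c + A_r2*B_2c (the Add step)
        P = mult(Ablk[r][0], Bblk[0][c])
        Q = mult(Ablk[r][1], Bblk[1][c])
        return [[p + q for p, q in zip(pr, qr)] for pr, qr in zip(P, Q)]
    return join(block(0, 0), block(0, 1), block(1, 0), block(1, 1))

print(mult([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```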
The analysis of the algorithms in this section requires the use of the Master Theorem. We state the
Master Theorem here for convenience.
2-5
Figure 3: Critical path. The squiggles represent two different code paths. The circle is another code path.
Theorem 1 (Master Theorem) Let a ≥ 1 and b > 1 be constants, let f (n) be a function, and let T (n) be
defined on the nonnegative integers by the recurrence
T (n) = aT (n/b) + f (n),
where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically as follows.
1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a) lg^k n) for some constant k ≥ 0, then T(n) = Θ(n^(log_b a) lg^(k+1) n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if af(n/b) ≤ cf(n) for some constant c < 1 and
all sufficiently large n, then T(n) = Θ(f(n)).
We begin by analyzing the work for Mult. The work is the running time of the algorithm on one
processor, which we compute by solving the recurrence relation for the serial equivalent of the algorithm.
We note that the matrix partitioning in Mult and Add takes O(1) time, as it requires only a constant
number of indexing operations. For the subroutine Add, the work at the top level (denoted A1(n)) then
consists of the work of 4 problems of size n/2 plus a constant factor, which is | https://ocw.mit.edu/courses/6-895-theory-of-parallel-systems-sma-5509-fall-2003/08fd01c65bec5dd4805a5088973f6e20_lecture2.pdf |
(denoted A1(n)) then
consists of the work of 4 problems of size n/2 plus a constant factor, which is expressed by the recurrence
A1(n) = 4A1(n/2) + Θ(1)    (1)
= Θ(n²).    (2)
We solve this recurrence by invoking case 1 of the Master Theorem. Similarly, the recurrence for the work
of Mult (denoted M1(n)):
M1(n) = 8M1(n/2) + Θ(n²)    (3)
= Θ(n³).    (4)
We also solve this recurrence with case 1 of the Master Theorem. The work is the same as for the traditional
triply-nested-loop serial algorithm.
The critical-path length is the maximum path length through a computation, as illustrated by figure
3. For Add, all subproblems have the same critical-path length, and all are executed in parallel. The
critical-path length (denoted A∞(n)) is a constant plus the critical-path length of one subproblem, and is
represented by the recurrence (solved by case 2 of the Master Theorem):
A∞(n) = A∞(n/2) + Θ(1)    (5)
= Θ(lg n).    (6)
Using this result, the critical-path length for Mult (denoted M∞(n)) is
M∞(n) = M∞(n/2) + Θ(lg n)    (7)
= Θ(lg² n),    (8)
by case 2 of the Master Theorem. From the work and critical-path length, we compute the parallelism:
M1(n)/M∞(n) = Θ(n³ / lg² n).    (9)
As an example, if n = 1000, the parallelism ≈ 10⁷. In practice, multiprocessor systems don’t have more than
≈ 64,000 processors, so the algorithm has more than adequate parallelism.
In fact, it is possible to trade parallelism for an algorithm that runs faster in practice. Mult may
run slower than an in-place algorithm because of the hierarchical structure of memory. We introduce a
new algorithm, Mult-Add, that trades parallelism in exchange for eliminating the need for the temporary
matrix T .
Mult-Add(C, A, B, n)
if n = 1
then C[1, 1] ← C[1, 1] + A[1, 1] · B[1, 1]
else partition A, B, and C into (n/2) × (n/2) submatrices
spawn Mult-Add(C11, A11, B11, n/2)
spawn Mult-Add(C12, A11, B12, n/2)
spawn Mult-Add(C21, A21, B11, n/2)
spawn Mult-Add(C22, A21, B12, n/2)
sync
spawn Mult-Add(C11, A12, B21, n/2)
spawn Mult-Add(C12, A12, B22, n/2)
spawn Mult-Add(C21, A22, B21, n/2)
spawn Mult-Add(C22, A22, B22, n/2)
sync
The work for Mult-Add (denoted M′₁(n)) is the same as the work for Mult, M′₁(n) = Θ(n³). Since
the algorithm now executes four recursive calls in parallel followed in series by another four recursive calls
in parallel, the critical-path length (denoted M′∞(n)) is
M′∞(n) = 2M′∞(n/2) + Θ(1)    (10)
= Θ(n)    (11)
by case 1 of the Master Theorem. The parallelism is now
M′₁(n)/M′∞(n) = Θ(n²).    (12)
When n = 1000, the parallelism ≈ 10⁶, which is still quite high.
The naive algorithm (M′′) that computes n² dot-products in parallel yields the following theoretical
results:
M′′₁(n) = Θ(n³)    (13)
M′′∞(n) = Θ(lg n)    (14)
⇒ Parallelism = Θ(n³ / lg n).    (15)
Although it does not use temporary storage, it is slower in practice due to less memory locality.
2.2 Sorting
In this section, we consider a parallel algorithm for sorting an array. We start by parallelizing the code for
Merge-Sort while using the traditional linear time algorithm Merge to merge the two sorted subarrays.
Merge-Sort(A, p, r)
if p < r
then q ← ⌊(p + r)/2⌋
spawn Merge-Sort(A, p, q)
spawn Merge-Sort(A, q + 1, r)
sync
Merge(A, p, q, r)
Since the running time of Merge is Θ(n), the work (denoted T1(n)) for Merge-Sort is
T1(n) = 2T1(n/2) + Θ(n)    (16)
= Θ(n lg n)    (17)
by case 2 of the Master Theorem. The critical-path length (denoted T∞(n)) is the critical-path length of
one of the two recursive spawns plus that of Merge | https://ocw.mit.edu/courses/6-895-theory-of-parallel-systems-sma-5509-fall-2003/08fd01c65bec5dd4805a5088973f6e20_lecture2.pdf |
T∞(n) = T∞(n/2) + Θ(n)    (18)
= Θ(n)    (19)
by case 3 of the Master Theorem. The parallelism, T1(n)/T∞ = Θ(lg n), is not scalable. The bottleneck is
the linear-time Merge. We achieve better parallelism by designing a parallel version of Merge.
P-Merge(A[1..l], B[1..m], C[1..n])
if m > l
then spawn P-Merge(B[1..m], A[1..l], C[1..n])
elseif n = 1
then C[1] ← A[1]
elseif l = 1
then if A[1] ≤ B[1]
then C[1] ← A[1]; C[2] ← B[1]
else C[1] ← B[1]; C[2] ← A[1]
else find j such that B[j] ≤ A[l/2] ≤ B[j + 1] using binary search
spawn P-Merge(A[1..(l/2)], B[1..j], C[1..(l/2 + j)])
spawn P-Merge(A[(l/2 + 1)..l], B[(j + 1)..m], C[(l/2 + j + 1)..n])
sync
P-Merge puts the elements of arrays A and B into array C in sequential order, where n = l + m. The
algorithm finds the median of the larger array and uses it to partition the smaller array. Then, it recursively
merges the lower portions and the upper portions of the arrays. The operation of the algorithm is illustrated
in figure 4.
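A serial Python sketch of P-Merge (the two spawns become recursive calls, and the binary search uses the standard-library bisect module):

```python
import bisect

def p_merge(A, B):
    """Merge sorted lists A and B by the P-Merge recursion, serially."""
    if len(A) < len(B):                  # make A the longer array
        A, B = B, A
    if not A:
        return []
    if len(A) == 1:                      # at most one element in each array
        return sorted(A + B)
    mid = len(A) // 2
    j = bisect.bisect_right(B, A[mid])   # partition B around A's median
    # Everything in A[:mid] and B[:j] is <= everything in A[mid:] and B[j:],
    # so the two halves can be merged independently and concatenated.
    return p_merge(A[:mid], B[:j]) + p_merge(A[mid:], B[j:])

print(p_merge([1, 3, 5, 7], [2, 4, 6, 8]))  # → [1, 2, 3, 4, 5, 6, 7, 8]
```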
We begin by analyzing the critical-path length of P-Merge. The critical-path length is equal to the
maximum critical-path length of the two spawned subproblems plus the work of the binary search. The
binary search completes in Θ(lg m) time, which is Θ(lg n) in the worst case. For the subproblems, half of A
is merged with all of B in the worst case. Since l ≥ n/2, at least n/4 elements are merged in the smaller
subproblem. That leaves at most 3n/4 elements to be merged in the larger subproblem. Therefore, the
critical-path is
T∞(n) ≤ T∞(3n/4) + O(lg n)
= O(lg² n)
Figure 4: Find where middle element of A goes into B. The boxes represent arrays.
by case 2 of the Master Theorem. To analyze the work of P-Merge, we set up a recurrence by using the
observation that each subproblem operates on αn elements, where 1/4 ≤ α ≤ 3/4. Thus, the work satisfies
the recurrence
T1(n) = T1(αn) + T1((1 − α)n) + O(lg n).
We shall show that T1(n) = Θ(n) by using the substitution method. We take T (n) ≤ an − b lg n as our
inductive assumption, for constants a, b > 0. We have
T(n) ≤ aαn − b lg(αn) + a(1 − α)n − b lg((1 − α)n) + Θ(lg n)
= an − b(lg(αn) + lg((1 − α)n)) + Θ(lg n)
= an − b(lg α + lg n + lg(1 − α) + lg n) + Θ(lg n)
= an − b lg n − (b(lg n + lg(α(1 − α))) − Θ(lg n))
≤ an − b lg n,
since we can choose b large enough so that b(lg n + lg(α(1 − α))) dominates Θ(lg n). We can also pick a large
enough to satisfy the base conditions. Thus, T(n) = Θ(n), which is the same as the work for the ordinary
Merge. Reanalyzing the Merge-Sort algorithm, with P-Merge replacing Merge, we find that the work
remains the same, but the critical-path length is now
T∞(n) = T∞(n/2) + Θ(lg² n)    (20)
= Θ(lg³ n)    (21)
by case 2 of the Master Theorem. The parallelism is now Θ(n lg n)/Θ(lg³ n) = Θ(n/lg² n). By using a more
clever algorithm, a parallelism of Ω(n/ lg n) can be achieved.
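The linear-work bound for P-Merge can also be sanity-checked numerically (an illustration, not a proof): iterating the work recurrence with the worst-case split α = 3/4 shows the ratio T1(n)/n staying bounded as n grows.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Work recurrence T1(n) = T1(αn) + T1((1−α)n) + O(lg n), with α = 3/4
    # and the O(lg n) term taken as lg n itself.
    if n <= 1:
        return 1
    a = (3 * n) // 4
    return T(a) + T(n - a) + max(1, int(math.log2(n)))

for k in (10, 14, 18):
    n = 1 << k
    print(n, round(T(n) / n, 3))   # the ratio stays bounded as n grows
```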
While it is important to analyze the theoretical bounds of algorithms, it is also necessary that the
algorithms perform well in practice. One shortcoming of Merge-Sort is that it is not in-place. An
in-place parallel version of Quick-Sort exists, which performs better than Merge-Sort in practice.
Additionally, while we desire a large parallelism, it is good practice to design algorithms that scale down
as well as up. We want the performance | https://ocw.mit.edu/courses/6-895-theory-of-parallel-systems-sma-5509-fall-2003/08fd01c65bec5dd4805a5088973f6e20_lecture2.pdf |
Additionally, while we desire a large parallelism, it is good practice to design algorithms that scale down
as well as up. We want the performance of our parallel algorithms when running on one processor to compare
well with the performance of the serial version of the algorithm. The best sorting algorithm to date requires
only 20% more work than the serial equivalent. Coming up with a dynamic multithreaded algorithm which
works well in practice is a good research project.
15.082J, 6.855J, and ESD.78J
September 21, 2010
Eulerian Walks
Flow Decomposition and
Transformations
Eulerian Walks in Directed Graphs in O(m) time.
Step 1. Create a breadth first search tree into node
1. For j not equal to 1, put the arc out of j in T
last on the arc list A(j).
Step 2. Create an Eulerian cycle by starting a walk
at node 1 and selecting arcs in the order they
appear on the arc lists.
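A Python sketch of these two steps (assuming, as the algorithm requires, a connected digraph in which every node's in-degree equals its out-degree; the node and arc names are just for the example):

```python
from collections import deque

def eulerian_cycle(n, arcs):
    out = {i: [] for i in range(1, n + 1)}
    for u, v in arcs:
        out[u].append(v)
    # Step 1: BFS tree of paths *into* node 1 (BFS on reversed arcs);
    # for j != 1, move j's tree arc to the end of its arc list.
    rev = {i: [] for i in range(1, n + 1)}
    for u, v in arcs:
        rev[v].append(u)
    tree = {}
    q, seen = deque([1]), {1}
    while q:
        v = q.popleft()
        for u in rev[v]:
            if u not in seen:
                seen.add(u)
                tree[u] = v          # tree arc (u, v) points toward node 1
                q.append(u)
    for j, v in tree.items():
        out[j].remove(v)
        out[j].append(v)
    # Step 2: walk from node 1, always taking the next unused arc in list order.
    walk, cur = [1], 1
    ptr = {i: 0 for i in range(1, n + 1)}
    while ptr[cur] < len(out[cur]):
        nxt = out[cur][ptr[cur]]
        ptr[cur] += 1
        walk.append(nxt)
        cur = nxt
    return walk

arcs = [(1, 2), (2, 3), (3, 1), (1, 3), (3, 2), (2, 1)]
print(eulerian_cycle(3, arcs))  # → [1, 2, 3, 2, 1, 3, 1]
```

On the six-arc example the walk traverses every arc exactly once and ends back at node 1, as the proof of correctness guarantees.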
Proof of Correctness
Relies on the following observation and invariant:
Observation: The walk will terminate at node 1.
Whenever the walk visits node j for j ≠ 1, the walk
has traversed one more arc entering node j than
leaving node j.
Invariant: If the walk has not traversed the tree arc
for node j, then there is a path from node j to
node 1 consisting of nontraversed tree arcs.
Eulerian Cycle
Animation
Eulerian Cycles in undirected graphs
Strategy: reduce to the directed graph problem as
follows:
Step 1. Use dfs to partition the arcs into disjoint
cycles
Step 2. Orient each arc along its directed cycle.
Afterwards, for all i, the number of arcs entering
node i is the same as the number of arcs leaving
node i.
Step 3. Run the algorithm for finding Eulerian
Cycles in directed graphs
Flow Decomposition and Transformations
Flow Decomposition
Removing Lower Bounds
Removing Upper Bounds
Node splitting
Arc flows: an arc flow x is a vector x satisfying:
Let b(i) = ∑j xij − ∑k xki.
We are not focused on upper and lower bounds
on x for now.
Flows along Paths
Usual: represent flows in terms of flows in arcs.
Alternative: represent a flow as the sum of flows
in paths and cycles.
[Diagram: two units of flow in the path P, and one unit of flow around the cycle C]
Properties of Path Flows
Let P be a directed path.
Let Flow(δ, P) be a flow of δ units in each arc of the path P.
[Diagram: Flow(2, P) on the path P]
Observation. If P is a path from s to t, then Flow(δ, P) sends δ units of flow from s to t, and has
conservation of flow at the other nodes.
Property of Cycle Flows
If p is a cycle, then sending one unit of flow along
p satisfies conservation of flow everywhere.
[Diagram: one unit of flow around a cycle]
Representations as Flows along Paths and Cycles
Let P be a collection of Paths; let f(P) denote the
flow in path P
Let C be a collection of cycles; let f(C) denote the
flow in cycle C.
One can convert the path and cycle flows into an
arc flow x as follows: for each arc (i,j) ∈ A
xij = ∑P∋(i,j) f(P) + ∑C∋(i,j) f(C)
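This conversion can be sketched in a few lines of Python (a cycle is written with its first node repeated at the end):

```python
from collections import defaultdict

def to_arc_flow(paths, cycles):
    """paths and cycles are lists of (node_sequence, flow); the arc flow is
    x_ij = sum of f(P) over paths containing (i,j), plus the same over cycles."""
    x = defaultdict(int)
    for seq, f in paths + cycles:
        for arc in zip(seq, seq[1:]):
            x[arc] += f
    return dict(x)

print(to_arc_flow([([1, 2, 3], 2)], [([2, 3, 2], 1)]))
# → {(1, 2): 2, (2, 3): 3, (3, 2): 1}
```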
Flow Decomposition
x: the initial flow
y: the updated flow
G(y): the subgraph with arcs (i, j) such that yij > 0, together with their incident nodes
f(P): the flow around path P (during the algorithm)
P: the paths with flow in the decomposition
C: the cycles with flow in the decomposition
INVARIANT: xij = yij + ∑P∋(i,j) f(P) + ∑C∋(i,j) f(C)
Initially, x = y and f = 0.
At the end, y = 0, and f gives the flow decomposition.
Deficit and Excess Nodes
Let x be a flow (not necessarily feasible).
If the flow out of node i exceeds the flow into node
i, then node i is a deficit node.
Its deficit is ∑j xij − ∑k xki.
If the flow out of node i is less than the flow into
node i, then node i is an excess node.
Its excess is -∑j xij + ∑k xki.
If the flow out of node i equals the flow into node i,
then node i is a balanced node.
Flow Decomposition Algorithm
Step 0. Initialize: y := x; f := 0; P := ∅ ; C:= ∅;
Step 1. Select a deficit node j in G(y). If no deficit node exists,
select a node j with an incident arc in G(y);
Step 2. Carry out depth first search from j in G(y) until finding a
directed cycle W in G(y) or a path W in G(y) from s to a node t
with excess in G(y).
Step 3.
1. Let Δ = capacity of W in G(y). (See next slide)
2. Add W to the decomposition with f(W) = Δ.
3. Update y (subtract flow in W) and excesses and deficits
4. If y ≠ 0, then go to Step 1
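The steps above can be sketched compactly in Python (an illustration that follows positive-flow arcs directly rather than maintaining explicit depth-first-search state; x is a dict from arcs (i, j) to nonnegative flows):

```python
def decompose(x):
    y = {a: f for a, f in x.items() if f > 0}

    def imbalance(i):                     # deficit if > 0, excess if < 0
        return (sum(f for (u, _), f in y.items() if u == i)
                - sum(f for (_, v), f in y.items() if v == i))

    result = []
    while y:
        # Step 1: start at a deficit node, else at any node with an outgoing arc.
        start = next((u for (u, _) in y if imbalance(u) > 0),
                     next(iter(y))[0])
        # Step 2: follow positive-flow arcs until a node repeats (a cycle W)
        # or an excess node is reached (a path W).
        walk = [start]
        while len(walk) == 1 or imbalance(walk[-1]) >= 0:
            nxt = next(v for (u, v) in y if u == walk[-1])
            if nxt in walk:               # closed a directed cycle
                walk = walk[walk.index(nxt):] + [nxt]
                break
            walk.append(nxt)
        arcs = list(zip(walk, walk[1:]))
        # Step 3: Δ = capacity of W; for a path, also bounded by deficit/excess.
        delta = min(y[a] for a in arcs)
        if walk[0] != walk[-1]:
            delta = min(delta, imbalance(walk[0]), -imbalance(walk[-1]))
        result.append((walk, delta))
        for a in arcs:                    # subtract the flow in W from y
            y[a] -= delta
            if y[a] == 0:
                del y[a]
    return result

print(decompose({(1, 2): 3, (2, 3): 2, (2, 4): 1}))
# → [([1, 2, 3], 2), ([1, 2, 4], 1)]
```

On a circulation (no deficit or excess nodes) the same routine returns only cycles, matching the corollary below for circulations.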
Capacities of Paths and Cycles
The capacity of a cycle C is the minimum arc flow on C with respect to the flow y. (In the example,
the capacity of C is 4.)
The capacity of a path P is denoted
D(P, y) = min[ def(s), excess(t), min(xij : (i,j) ∈ P) ].
(In the example, deficit(s) = 3, excess(t) = 2, and the capacity of P is 2.)
[Diagram: a network with a cycle C of capacity 4 and a path P from s to t of capacity 2]
Flow Decomposition
Animation
Complexity Analysis
Select initial node:
O(1) per path or cycle, assuming that we
maintain a set of supply nodes and a set of
balanced nodes incident to a positive flow arc
Find cycle or path
O(n) per path or cycle since finding the next
arc in depth first search takes O(1) steps.
Update step
O(n) per path or cycle
Complexity Analysis (continued)
Lemma. The number of paths and cycles found in the
flow decomposition is at most m + n – 1.
Proof.
In the update step for a cycle, at least one of
the arcs has its capacity reduced to 0, and the arc is
eliminated.
In an update step for a path, either an arc is
eliminated, or a deficit node has its deficit reduced to
0, or an excess node has its excess reduced to 0.
(Also, there is never a situation with exactly one
node whose excess or deficit is non-zero).
Conclusion
Flow Decomposition Theorem. Any non-negative
feasible flow x can be decomposed into the
following:
i. the sum of flows in paths directed from deficit
nodes to excess nodes, plus
ii. the sum of flows around directed cycles.
It will always have at most n + m paths and cycles.
Remark. The decomposition usually is not unique.
Corollary
A circulation is a flow with the property that the
flow in is the flow out for each node.
Flow Decomposition Theorem for circulations. Any
non-negative feasible flow x can be decomposed
into the sum of flows around directed cycles.
It will always have at most m cycles.
An application of Flow Decomposition
Consider a feasible flow where the supply of node 1 is
n-1, and the supply of every other node is -1.
Suppose the arcs with positive flow have no cycle.
Then the flow can be decomposed into unit flows
along paths from node 1 to node j for each j ≠ 1.
b(i) = ∑j xij − ∑k xki = n − 1 if i = 1, and −1 if i ≠ 1.
A flow and its decomposition
[Diagram: a flow on six nodes, with b(1) = 5 and b(j) = −1 for j ≠ 1, and its decomposition]
The decomposition of flows yields the paths:
1-2, 1-3, 1-3-4
1-3-4-5 and 1-3-4-6.
There are no cycles in the decomposition.
Application to shortest paths
To find a shortest path from node 1 to each other
node in a network, find a minimum cost flow in
which b(1) = n-1 and b(j) = -1 for j ≠ 1.
The flow decomposition gives the shortest paths.
Other Applications of Flow Decomposition
Reformulations of Problems.
There are network flow models that use path
and cycle based formulations.
Multicommodity Flows
Used in proving theorems
Can be used in developing algorithms
The min cost flow problem (again)
The minimum cost flow problem
uij = capacity of arc (i,j).
cij = unit cost of flow sent on (i,j).
xij = amount shipped on arc (i,j)
Minimize ∑ cijxij
subject to ∑j xij − ∑k xki = bi for all i ∈ N,
and 0 ≤ xij ≤ uij for all (i,j) ∈ A.
The model seems very limiting
• The lower bounds are 0.
• The supply/demand constraints must be satisfied
exactly
• There are no constraints on the flow entering or
leaving a node.
We can model each of these constraints using
transformations.
• In addition, we can transform a min cost flow
problem into an equivalent problem with no
upper bounds.
Eliminating Lower Bound on Arc Flows
Suppose that there is a lower bound lij on the arc flow in
(i,j)
Minimize ∑ cijxij
subject to ∑j xij − ∑k xki = bi for all i ∈ N,
and lij ≤ xij ≤ uij for all (i,j) ∈ A.
Then let yij = xij − lij, so that xij = yij + lij:
Minimize ∑ cij(yij + lij)
subject to ∑j (yij + lij) − ∑k (yki + lki) = bi for all i ∈ N,
and lij ≤ yij + lij ≤ uij for all (i,j) ∈ A.
Then simplify the expressions.
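A tiny numeric check of the substitution on a single arc, with illustrative values c = 4, l = 2, u = 7: the bounds shift to 0 ≤ y ≤ u − l and the objective changes only by the constant c·l.

```python
c, l, u = 4, 2, 7                  # illustrative cost and bounds on one arc
for x in range(l, u + 1):          # every feasible integer flow value on the arc
    y = x - l                      # the substituted variable
    assert 0 <= y <= u - l         # shifted bounds
    assert c * x == c * y + c * l  # objective shifts by the constant c*l
print("substitution verified")
```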
Allowing inequality constraints
Minimize ∑ cijxij
subject to ∑j xij − ∑k xki ≤ bi for all i ∈ N,
and lij ≤ xij ≤ uij for all (i,j) ∈ A.
Let B = ∑i bi . For feasibility, we need B ≥ 0
Create a “dummy node” n+1, with bn+1 = -B. Add arcs
(i, n+1) for i = 1 to n, with ci,n+1 = 0. Any feasible
solution for the original problem can be transformed
into a feasible solution for the new problem by
sending excess flow to node n+1.
Node Splitting
[Diagram: a flow x on a small network; arc numbers are capacities]
Suppose that we want to add the constraint that the
flow into node 4 is at most 7.
Method: split node 4 into two nodes, say 4’ and 4”
[Figure: the same network after node 4 is split into 4′ and 4″, joined by an arc of capacity 7. Flow x′ in the new network can be obtained from flow x, and vice versa.]
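The splitting itself is a purely mechanical rewiring. A minimal sketch under my own conventions (node copies labeled `(v, 'in')` and `(v, 'out')`; v's supply or demand is placed on the out-copy so that only flow arriving on incoming arcs counts against the cap, which is one reasonable reading of "flow into node v"):

```python
def split_node(b, arcs, v, cap):
    """Split node v into v_in = (v, 'in') and v_out = (v, 'out'), joined by
    an arc of capacity `cap`; this bounds the flow passing into v."""
    v_in, v_out = (v, 'in'), (v, 'out')
    new_b = {node: s for node, s in b.items() if node != v}
    new_b[v_in], new_b[v_out] = 0, b[v]   # supply/demand sits on the out-copy
    new_arcs = {}
    for (i, j), data in arcs.items():
        i2 = v_out if i == v else i       # arcs leaving v now leave v_out
        j2 = v_in if j == v else j        # arcs entering v now enter v_in
        new_arcs[(i2, j2)] = data
    new_arcs[(v_in, v_out)] = (0, cap)    # the throughput bottleneck
    return new_b, new_arcs
```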
Eliminating Upper Bounds on Arc Flows

The minimum cost flow problem:
Min ∑ cij xij
s.t. ∑j xij − ∑k xki = bi for all i ∈ N,
and 0 ≤ xij ≤ uij for all (i,j) ∈ A.
[Figure: before the transformation, arc (i,j) carries flow xij = 5 with capacity uij = 20, and the endpoints have supplies bi = 7 and bj = −2. After the transformation, a new node ⟨i,j⟩ with supply uij = 20 replaces the arc; it sends uij − xij = 15 to node i and xij = 5 to node j. The supply at i drops to bi − uij = −13, while bj = −2 is unchanged.]
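This transformation is also easy to mechanize. A minimal sketch, with the arc orientation taken from my reconstruction of the before/after example (bi = 7, uij = 20, xij = 5); the node label `('arc', i, j)` and dict layout are my own conventions:

```python
def remove_arc_capacities(b, arcs):
    """Replace each capacitated arc (i,j) by a node a = ('arc', i, j) with
    supply u_ij, plus uncapacitated arcs (a, i) at cost 0 and (a, j) at
    cost c_ij, and shift b_i down by u_ij.
    arcs: (i, j) -> (cost, capacity); new arcs carry only a cost."""
    new_b = dict(b)
    new_arcs = {}
    for (i, j), (c, u) in arcs.items():
        a = ('arc', i, j)
        new_b[a] = u          # the new node supplies exactly u_ij units
        new_b[i] -= u
        new_arcs[(a, i)] = 0  # carries u_ij - x_ij at no cost
        new_arcs[(a, j)] = c  # carries x_ij at the original cost
    return new_b, new_arcs
```

Splitting u_ij units between the two outgoing arcs reproduces the capacity constraint: the flow reaching j can never exceed u_ij.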
Summary

1. Efficient implementation of finding an Eulerian cycle.
2. Flow decomposition theorem.
3. Transformations that can be used to incorporate constraints into minimum cost flow problems.
MIT OpenCourseWare
http://ocw.mit.edu
15.082J / 6.855J / ESD.78J Network Optimization
Fall 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/15-082j-network-optimization-fall-2010/0911d1fe02b68af5127a53657f25b5a8_MIT15_082JF10_lec04.pdf |
18.156 Lecture Notes
Lecture 7
Lecturer: Larry Guth
Trans.: Cole Graham
February 20, 2015
In Lecture 6 we developed the following continuity method for proving isomorphisms on Banach spaces:

Proposition 1. Let X and Y be Banach spaces, let I be a connected subset of R, and let Lt : X → Y be a continuous family of operators with t ∈ I. If Lt0 is an isomorphism for some t0 ∈ I, and there exists λ > 0 such that ‖Lt x‖Y ≥ λ‖x‖X for all x ∈ X and all t ∈ I, then Lt is an isomorphism for all t ∈ I.
We will now use α-Hölder norm estimates related to Schauder's inequality to establish an isomorphism theorem for the elliptic Dirichlet problem on discs. Let L be an elliptic operator satisfying the usual hypotheses, i.e.

Lu = ∑_{i,j} aij ∂i∂j u

with ‖aij‖_{C^α(B̄1)} ≤ β and 0 < λ ≤ eig({aij}) ≤ Λ < ∞. Define the map L̄ : C^{2,α}(B̄1) → C^α(B1) × C^{2,α}(∂B1) by L̄u := (Lu, u|∂B1). The principal result of this lecture is:

Theorem 1. If L obeys the usual hypotheses then L̄ is an isomorphism.
We may restate this result as follows:
Corollary 1. For all f ∈ C^α(B1) and all ϕ ∈ C^{2,α}(∂B1) there exists a unique u ∈ C^{2,α}(B̄1) such that Lu = f on B1 and u|∂B1 = ϕ.
To establish Theorem 1, we verify that Δ̄ is an isomorphism, and show that Lt := (1 − t)Δ + tL satisfies the hypotheses of Proposition 1. To prove both these statements we will rely heavily on the following version of Schauder's inequality:
Theorem 2 (Global Schauder). Suppose u ∈ C^{2,α}(B̄1) and L satisfies the usual hypotheses. Let f := Lu and ϕ := u|∂B1. Then

‖u‖_{C^{2,α}(B̄1)} ≤ C(n, α, λ, Λ, β) [ ‖f‖_{C^α(B1)} + ‖ϕ‖_{C^{2,α}(∂B1)} ].    (1)
The Banach spaces involved in this bound, namely C^{2,α}(B̄1), C^α(B1), and C^{2,α}(∂B1), motivate the definition of the map L̄. Indeed, we have defined the map L̄ : C^{2,α}(B̄1) → C^α(B1) × C^{2,α}(∂B1) because (1) is precisely the form of quantitative injectivity required to apply Proposition 1 to the family Lt. We also use Theorem 2 to show:

Proposition 2. Δ̄ is an isomorphism.
Proof. From the preceding lecture, it is sufficient to show that Δ̄ is surjective and satisfies an injectivity estimate of the form found in Proposition 1. To prove surjectivity, fix f ∈ C^α(B1) and ϕ ∈ C^{2,α}(∂B1). Extend f to F ∈ C^α_c(R^n). Define w := F ∗ Γn, where Γn is the fundamental solution to the Laplacian considered in earlier lectures. Then w ∈ C^{2,α}(R^n) and Δw = f on B1. However, there is no reason to expect that w|∂B1 = ϕ. To rectify this issue, use the Poisson kernel to find v ∈ C^{2,α}(B̄1) such that Δv = 0 on B1 and v|∂B1 = ϕ − w|∂B1. Set u = v + w ∈ C^{2,α}(B̄1). Then Δu = Δv + Δw = f on B1 and u|∂B1 = v|∂B1 + w|∂B1 = ϕ. Hence Δ̄u = (f, ϕ), so Δ̄ is surjective. Theorem 2 shows that

‖f‖_{C^α(B1)} + ‖ϕ‖_{C^{2,α}(∂B1)} ≥ C(n, α, 1, 1, 1)^{−1} ‖u‖_{C^{2,α}(B̄1)},

so ‖Δ̄u‖ ≥ λ‖u‖_{C^{2,α}(B̄1)} for all u ∈ C^{2,α}(B̄1), with λ = C(n, α, 1, 1, 1)^{−1} > 0. As we showed in the previous lecture, together with surjectivity this estimate proves that Δ̄ is an isomorphism.
Proof of Theorem 1. Consider the operator Lt for t ∈ [0, 1]. Because ‖aij‖_{C^α(B̄1)} ≤ β for all i, j,

‖(1 − t)δij + t aij‖_{C^α(B̄1)} ≤ (1 − t) + tβ ≤ β′,

where β′ := max{β, 1}. Similarly, we must have

eig({(1 − t)δij + t aij}) ⊂ [(1 − t) + tλ, (1 − t) + tΛ] ⊂ [λ′, Λ′],

where λ′ := min{λ, 1} and Λ′ := max{Λ, 1}. Hence the operators Lt obey regularity and spectral bounds which are uniform in t for t ∈ [0, 1]. Theorem 2 therefore implies that

‖Lt u‖_{C^α(B1)} + ‖u|∂B1‖_{C^{2,α}(∂B1)} ≥ C(n, α, λ′, Λ′, β′)^{−1} ‖u‖_{C^{2,α}(B̄1)}

for all u ∈ C^{2,α}(B̄1) and all t ∈ [0, 1]. By Proposition 1, this regularity combined with Proposition 2 is sufficient to establish Theorem 1.
In summary, we used explicit formulæ involving Γn and the Poisson kernel to establish the surjectivity of Δ̄, and then used injectivity bounds furnished by the global Schauder inequality to conclude that Δ̄ and L̄ are in fact isomorphisms.
It remains to verify the global Schauder inequality. We will read through the proof and fill in details for homework. The essential difference between the global and interior Schauder inequalities lies in the treatment of region boundaries. In the interior Schauder inequality proven previously, C^{2,α} regularity of u on a ball is controlled by C^α regularity of Lu on a larger ball. Global Schauder replaces regularity on a larger domain with regularity on the boundary. Unsurprisingly therefore, the proof of global Schauder relies on a form of Korn's inequality which accounts for behavior near boundaries:

Theorem 3 (Boundary Korn). Let H := {x ∈ R^n ; xn > 0} denote the upper half space. Let u ∈ C^{2,α}_c(H̄) be such that u = 0 on ∂H. Then [∂²u]_{C^α(H)} ≤ C(n, α)[Δu]_{C^α(H)}.
As with the standard Korn inequality, the proof of Theorem 3 is divided into two parts:

1. Find a formula for ∂i∂ju in terms of Δu.
2. Bound the integral in the formula to obtain an operator estimate on the map Δu ↦ ∂i∂ju.
To approach the first part of the proof, let u ∈ C^{2,α}_c(H̄), and extend Δu to F : R^n → R by setting

F(x1, . . . , xn) = −Δu(x1, . . . , x_{n−1}, −xn) when xn < 0.

Proposition 3. u = F ∗ Γn on H.
Proof. Let w = F ∗ Γn. By the symmetry of Γn and the antisymmetry of F in xn, w = 0 when xn = 0.
That is, w vanishes on ∂H. Just as in previous work, w(x) → 0 as |x| → ∞ and ∆w = F on H. Hence
∆(u − w) = 0 on H, u − w = 0 on ∂H, and u − w → 0 as |x| → ∞. Applying the maximum principle to
ever larger semidiscs, we see that u = w on H.
The same arguments from the proof of the standard Korn inequality show that

∂i∂ju(x) = lim_{ε→0⁺} ∫_{|y|>ε} F(x − y) ∂i∂jΓn(y) dy + (1/n) δij F(x)

for all x ∈ H. Define the operator TεF(x) := ∫_{|y|>ε} F(x − y) ∂i∂jΓn(y) dy and integral kernel K := ∂i∂jΓn. On the homework we will complete the operator norm part of the proof of boundary Korn:
Proposition 4. If F ∈ C^α_c(H̄) + C^α_c(H̄⁻) (but F is permitted to be discontinuous on ∂H) and ε < min{xn, x̄n} with x, x̄ ∈ H, then

|TεF(x) − TεF(x̄)| ≤ C(n, α) |x − x̄|^α ([F]_{C^α(H)} + [F]_{C^α(H⁻)}).

Here H⁻ denotes the lower half space.
As in the proof of standard Korn, cancellation properties of K are crucial to the proof of this operator estimate. For standard Korn we used the fact that ∫_{Sr} K = 0 for every radius r. This fact is not sufficient for boundary Korn, however, because spheres centered at x or x̄ in H will intersect ∂H, where we have no control on F. To fix this, we note that K enjoys even stronger cancellation:
Proposition 5. If Hr ⊂ Sr is any hemisphere, then ∫_{Hr} K = 0.
Proof. Γn is even, and hence so is its second derivative ∂i∂jΓn = K. The substitution y ↦ −y then shows that

∫_{Hr} K = (1/2) ∫_{Sr} K = 0.
Now to prove Proposition 4 we may divide the integral TεF (x) into three rough regions:
1. ε < |y| < xn, where K cancels on whole spheres.
2. xn < |y| < R for some large R, which is a bounded region on which K is well-behaved.
3. |y| > R, on which the hemisphere cancellation of K is useful.
The details of the argument are left to the homework.
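The hemisphere cancellation of Proposition 5 can also be checked numerically. The sketch below is an illustrative check of my own in the plane (n = 2), where Γ2(y) = (1/2π) log|y|, so K = ∂i∂jΓ2 has the explicit form used here; it is not part of the lecture, and the midpoint-rule quadrature is just a convenient discretization of the upper semicircle.

```python
import math

def K(i, j, y):
    # second partials of the 2-D fundamental solution Γ2(y) = (1/2π) log|y|
    r2 = y[0]**2 + y[1]**2
    if i == j:
        return (r2 - 2 * y[i]**2) / (2 * math.pi * r2**2)
    return -y[0] * y[1] / (math.pi * r2**2)

def upper_hemisphere_integral(i, j, r, n=2000):
    # midpoint-rule line integral of K over the semicircle {|y| = r, y2 > 0}
    h = math.pi / n
    return sum(K(i, j, (r * math.cos((k + 0.5) * h),
                        r * math.sin((k + 0.5) * h))) * r * h
               for k in range(n))
```

For every index pair and every radius the integral vanishes to machine precision, consistent with the evenness argument in the proof.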
MIT OpenCourseWare
http://ocw.mit.edu
18.156 Differential Analysis II: Partial Differential Equations and Fourier Analysis
Spring 2016
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/18-156-differential-analysis-ii-partial-differential-equations-and-fourier-analysis-spring-2016/091bf989a474217c56f80bbb84cead6e_MIT18_156S16_lec7.pdf |
Linear Spaces

We have seen (12.1-12.3 of Apostol) that n-tuple space Vn has the following properties:

Addition:
1. (Commutativity) A + B = B + A.
2. (Associativity) A + (B+C) = (A+B) + C.
3. (Existence of zero) There is an element 0 such that A + 0 = A for all A.
4. (Existence of negatives) Given A, there is a B such that A + B = 0.

Scalar multiplication:
5. (Associativity) c(dA) = (cd)A.
6. (Distributivity) (c+d)A = cA + dA, c(A+B) = cA + cB.
7. (Multiplication by unity) 1A = A.
Definition. More generally, let V be any set of objects
(which we call vectors). And suppose there are two operations on
V, as follows: The first is an operation (denoted +) that
assigns to each pair A, B of vectors, a vector denoted A + B.
The second is an operation that assigns to each real number c
and each vector A, a vector denoted cA. Suppose also that the
seven preceding properties hold. Then V, with these two operations, is called a linear space (or a vector space). The seven properties are called the axioms for a linear space.
There are many examples of linear spaces besides n-tuple space Vn. The study of linear spaces and their properties is dealt with in a subject called Linear Algebra. We shall treat only those aspects of linear algebra needed for calculus. Therefore we will be concerned only with n-tuple space Vn and with certain of its subsets called "linear subspaces":
Definition. Let W be a non-empty subset of Vn; suppose W is closed under vector addition and scalar multiplication. Then W is called a linear subspace of Vn (or sometimes simply a subspace of Vn).

To say W is closed under vector addition and scalar multiplication means that for every pair A, B of vectors of W, and every scalar c, the vectors A + B and cA belong to W. Note that it is automatic that the zero vector 0 belongs to W, since for any A in W, we have 0 = 0A. Furthermore, for each A in W, the vector −A is also in W. This means (as you can readily check) that W is a linear space in its own right (i.e., it satisfies all the axioms for a linear space).
Subspaces of Vn may be specified in many different ways, as we shall see.

Example 1. The subset of Vn consisting of the zero vector 0 alone is a subspace of Vn; it is the "smallest possible" subspace. And of course Vn is by definition a subspace of Vn; it is the "largest possible" subspace.
Example 2. Let A be a fixed non-zero vector. The subset of Vn consisting of all vectors X of the form X = cA is a subspace of Vn. It is called the subspace spanned by A. In the case n = 2 or 3, it can be pictured as consisting of all vectors lying on a line through the origin.
Example 3. Let A and B be given non-zero vectors that are not parallel. The subset of Vn consisting of all vectors of the form

X = cA + dB

is a subspace of Vn. It is called the subspace spanned by A and B. In the case n = 3, it can be pictured as consisting of all vectors lying in the plane through the origin that contains A and B.
We generalize the construction given in the preceding examples as follows:

Definition. Let S = {A1, ..., Ak} be a set of vectors in Vn. A vector X of Vn of the form

X = c1A1 + ... + ckAk

is called a linear combination of the vectors A1, ..., Ak. The set W of all such vectors X is a subspace of Vn, as we will see; it is said to be the subspace spanned by the vectors A1, ..., Ak. It is also called the linear span of A1, ..., Ak and denoted by L(S).
Let us show that W is a subspace of Vn. If X and Y belong to W, then

X = c1A1 + ... + ckAk and Y = d1A1 + ... + dkAk

for some scalars ci and di. We compute

X + Y = (c1+d1)A1 + ... + (ck+dk)Ak,
aX = (ac1)A1 + ... + (ack)Ak,

so both X + Y and aX belong to W by definition. Thus W is a subspace of Vn.

Giving a spanning set for W is one standard way of specifying W.
Different spanning sets can of course give the same subspace. For example, it is intuitively clear that, for the plane through the origin in Example 3, any two non-zero vectors C and D that are not parallel and lie in this plane will span it. We shall give a proof of this fact shortly.
Example 4. The n-tuple space Vn has a natural spanning set, namely the vectors

E1 = (1,0,0,...,0), E2 = (0,1,0,...,0), ..., En = (0,0,0,...,1).

These are often called the unit coordinate vectors in Vn. It is easy to see that they span Vn, for if X = (x1,...,xn) is an element of Vn, then

X = x1E1 + ... + xnEn.
In the case where n = 2, we often denote the unit coordinate vectors E1 and E2 in V2 by I and J, respectively. In the case where n = 3, we often denote E1, E2, and E3 by I, J, and K, respectively. They are pictured as in the accompanying figure.
Example 5. The subset W of V3 consisting of all vectors of the form (a,b,0) is a subspace of V3. For if X and Y are 3-tuples whose third component is 0, so are X + Y and cX. It is easy to see that W is the linear span of (1,0,0) and (0,1,0).
Example 6. The subset of V3 consisting of all vectors of the form X = (3a+2b, a−b, a+7b) is a subspace of V3. It consists of all vectors of the form

X = a(3,1,1) + b(2,−1,7),

so it is the linear span of (3,1,1) and (2,−1,7).
Example 7. The set W of all 4-tuples (x1,x2,x3,x4) such that

3x1 − x2 + 5x3 + x4 = 0

is a subspace of V4, as you can check. Solving this equation for x4, we see that a 4-tuple belongs to W if and only if it has the form

X = (x1, x2, x3, −3x1 + x2 − 5x3),

where x1 and x2 and x3 are arbitrary. This element can be written in the form

X = x1(1,0,0,−3) + x2(0,1,0,1) + x3(0,0,1,−5).

It follows that (1,0,0,−3) and (0,1,0,1) and (0,0,1,−5) span W.
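Example 7 is easy to verify computationally. A minimal check of my own (the defining equation is reconstructed from the solved form x4 = −3x1 + x2 − 5x3, and the function names are illustrative):

```python
def in_W(x):
    # membership in W from Example 7: 3*x1 - x2 + 5*x3 + x4 = 0
    x1, x2, x3, x4 = x
    return 3 * x1 - x2 + 5 * x3 + x4 == 0

def span_element(c1, c2, c3):
    # c1*(1,0,0,-3) + c2*(0,1,0,1) + c3*(0,0,1,-5)
    return (c1, c2, c3, -3 * c1 + c2 - 5 * c3)
```

Each spanning vector, and every linear combination of them, satisfies the defining equation, while a generic 4-tuple does not.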
Exercises

1. Show that the subset of V3 specified in Example 5 is a subspace of V3. Do the same for the subset of V4 specified in Example 7. What can you say about the set of all (x1,...,xn) such that a1x1 + ... + anxn = 0 in general? (Here we assume A = (a1,...,an) is not the zero vector.) Can you give a geometric interpretation?
2. In each of the following, let W denote the set of all vectors (x,y,z) in V3 satisfying the condition given. (Here we use (x,y,z) instead of (x1,x2,x3) for the general element of V3.) Determine whether W is a subspace of V3. If it is, draw a picture of it or describe it geometrically, and find a spanning set for W.
(a) x = 0.
(b) x + y = 0.
(c) x + y = 1.
(d) x = y and 2x = z.
(e) x = y or 2x = z.
(f) x² − y² = 0.
(g) x² + y² = 0.
3. Consider the set F of all real-valued functions defined on the interval [a,b].
(a) Show that F is a linear space if f + g
denotes the usual sum of functions and cf denotes the usual
product of a function by a real number. What is the zero
vector?
(b) Which of the following are subspaces of F?
(i) All continuous functions.
(ii) All integrable functions.
(iii) All piecewise-monotonic functions.
(iv) All differentiable functions.
( v ) All functions f such that f (a) = 0.
(vi) All polynomial functions.
Linear independence

Definition. We say that the set S = {A1,...,Ak} of vectors of Vn spans the vector X if X belongs to L(S), that is, if

X = c1A1 + ... + ckAk

for some scalars ci. If S spans the vector X, we say that S spans X uniquely if the equations

X = ∑_{i=1}^{k} ciAi and X = ∑_{i=1}^{k} diAi

imply that ci = di for all i.
It is easy to check the following:

Theorem 1. Let S = {A1,...,Ak} be a set of vectors of Vn; let X be a vector in L(S). Then S spans X uniquely if and only if S spans the zero vector 0 uniquely.

Proof. Note that 0 = ∑i 0Ai. This means that S spans the zero vector uniquely if and only if the equation ∑i ciAi = 0 implies that ci = 0 for all i.
Suppose S spans 0 uniquely. To show S spans X uniquely, suppose

X = ∑_{i=1}^{k} ciAi and X = ∑_{i=1}^{k} diAi.

Subtracting, we see that ∑_{i=1}^{k} (ci − di)Ai = 0, whence ci − di = 0, or ci = di, for all i.

Conversely, suppose S spans X uniquely. Then X = ∑i xiAi for some (unique) scalars xi. Now if ∑i ciAi = 0, it follows that X = ∑i (xi + ci)Ai. Since S spans X uniquely, we must have xi = xi + ci, or ci = 0, for all i. □
This theorem implies that if S spans one vector of L(S) uniquely, then it spans the zero vector uniquely, whence it spans every vector of L(S) uniquely. This condition is important enough to be given a special name:

Definition. The set S = {A1,...,Ak} of vectors of Vn is said to be linearly independent (or simply, independent) if it spans the zero vector uniquely. The vectors themselves are also said to be independent in this situation. If a set is not independent, it is said to be dependent.
Example 8. If a subset T of a set S is dependent, then S itself is dependent. For if T spans 0 non-trivially, so does S. (Just add on the additional vectors with zero coefficients.) This statement is equivalent to the statement that if S is independent, then so is any subset of S.
Example 9. Any set containing the zero vector 0 is dependent. For example, if S = {A1,...,Ak} and A1 = 0, then

1A1 + 0A2 + ... + 0Ak = 0

is a non-trivial linear combination that equals the zero vector.

Example 10. The unit coordinate vectors E1,...,En in Vn span 0 uniquely, so they are independent.
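Independence of a concrete set of vectors can be decided mechanically by row reduction: the set is independent exactly when no row can be reduced away by the earlier ones. A minimal sketch of my own (exact arithmetic via fractions; the function name and list-of-tuples representation are illustrative conventions):

```python
from fractions import Fraction

def is_independent(vectors):
    """Row-reduce the vectors; they are independent iff every row yields a
    pivot, i.e. only the trivial combination gives the zero vector."""
    rows = [[Fraction(c) for c in v] for v in vectors]
    n = len(rows[0]) if rows else 0
    col = 0
    for r in range(len(rows)):
        # find the next pivot column for row r
        while col < n and all(rows[i][col] == 0 for i in range(r, len(rows))):
            col += 1
        if col == n:
            return False  # some row is a combination of the earlier rows
        p = next(i for i in range(r, len(rows)) if rows[i][col] != 0)
        rows[r], rows[p] = rows[p], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][col] / rows[r][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        col += 1
    return True
```

The checks below mirror Examples 9 and 10: the unit coordinate vectors are independent, while any set containing 0, or a pair of parallel vectors, is dependent.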
Example 11. Let S = {A1,...,Ak}. If the vectors Ai are non-zero and mutually orthogonal, then S is independent. For suppose

c1A1 + ... + ckAk = 0.

Taking the dot product of both sides of this equation with A1 gives the equation 0 = c1 A1·A1 (since Ai·A1 = 0 for i ≠ 1). Now A1 ≠ 0 by hypothesis, whence A1·A1 ≠ 0, whence c1 = 0. Similarly, taking the dot product with Aj for the fixed index j shows that cj = 0.
Sometimes it is convenient to replace the vectors Ai by the vectors Bi = Ai/‖Ai‖. Then the vectors B1,...,Bk are of unit length and are mutually orthogonal. Such a set of vectors is called an orthonormal set. The coordinate vectors E1,...,En form such a set.
Example 12. A set consisting of a single vector A is independent if A ≠ 0. A set consisting of two non-zero vectors A, B is independent if and only if the vectors are not parallel. More generally, one has the following result:
Theorem 2. The set S = {A1,...,Ak} is independent if and only if none of the vectors Aj can be written as a linear combination of the others.

Proof. Suppose first that one of the vectors equals a linear combination of the others. For instance, suppose that

A1 = c2A2 + ... + ckAk;

then the following non-trivial linear combination equals zero:

(−1)A1 + c2A2 + ... + ckAk = 0.

Conversely, if

c1A1 + ... + ckAk = 0,

where not all the ci are equal to zero, we can choose m so that cm ≠ 0, and obtain the equation

Am = −∑_{i≠m} (ci/cm) Ai,

where the sum on the right extends over all indices different from m. □
Given a subspace W of Vn, there is a very important relation that holds between spanning sets for W and independent sets in W:

Theorem 3. Let W be a subspace of Vn that is spanned by the k vectors A1,...,Ak. Then any independent set of vectors in W contains at most k vectors.
be a set of vectors of W; let m 2 k. We
wish to show that these vectors are dependent. That is, we wish to find
scalars xl,...,x m ' ---nc;t all zero, such that
Since each vector B belongs to W, we can write it as a linear combination of
the vectors Ai .
as follows:
We do so, using a "double-indexing" notation for the coefficents,
j
Multiplying the equation by x and summing over j, and collecting terms, we
j
have the equation
In order for <x .B to equal 2 , it will suffice if we can choose the x
j
so that coefficient of each vector Ai in this equation equals 0. Ncw the
j
numbers aij
(homogeneous) system consisting of k equations in m unknowns. Since m > k,
are given, so that finding the x. is just a matter of solving a
3
there are more unknowns than equations. In this case the system always has a non-trivial
solution X (i.e., one different from the zero vector). This is a standard fact
about linear equations, which we now prove. a
First, we need a definition.

Definition. Given a homogeneous system of linear equations, as in (*) following, a solution of the system is a vector (x1,...,xn) that satisfies each equation of the system. The set of all solutions is a linear subspace of Vn (as you can check). It is called the solution space of the system.

It is easy to see that the solution set is a subspace. If we let Aj be the n-tuple whose components are the coefficients appearing in the jth equation of the system, then the solution set consists of those X such that Aj·X = 0 for all j. If X and Y are two solutions, then

Aj·(X + Y) = Aj·X + Aj·Y = 0 and Aj·(cX) = c(Aj·X) = 0.

Thus X + Y and cX are also solutions, as claimed.
Theorem 4. Given a homogeneous system of k linear equations in n unknowns. If k is less than n, then the solution space contains some vector other than 0.

Proof. We are concerned here only with proving the existence of some solution other than 0, not with actually finding such a solution in practice, nor with finding all possible solutions. (We will study the practical problem in much greater detail in a later section.)

We start with a system of k equations in n unknowns:

a11 x1 + ... + a1n xn = 0
...
ak1 x1 + ... + akn xn = 0.        (*)

Our procedure will be to reduce the size of this system step-by-step by eliminating first x1, then x2, and so on. After k − 1 steps, we will be reduced to solving just one equation, and this will be easy. But a certain amount of care is needed in the description; for instance, if a11 = ... = ak1 = 0, it is nonsense to speak of "eliminating" x1, since all its coefficients are zero. We have to allow for this possibility.
To begin then, if all the coefficients of x1 are zero, you may verify that the vector (1, 0, ..., 0) is a solution of the system which is different from 0, and you are done. Otherwise, at least one of the coefficients of x1 is nonzero, and we may suppose for convenience that the equations have been arranged so that this happens in the first equation, with the result that a11 ≠ 0. We multiply the first equation by the scalar a21/a11 and then subtract it from the second, eliminating the x1-term from the second equation. Similarly, we eliminate the x1-term in each of the remaining equations. The result is a new system of linear equations of the form (**), in which the equations after the first involve only x2, ..., xn; these later equations form the smaller system referred to below.

Now any solution of this new system of equations is also a solution of the old system (*), because we can recover the old system from the new one: we merely multiply the first equation of the system (**) by the same scalars we used before, and then we add it to the corresponding later equations of this system.
The crucial thing about what we have done is contained in the following statement: If the smaller system has a solution other than the zero vector, then the larger system (**) also has a solution other than the zero vector (so that the original system (*) we started with has a solution other than the zero vector). We prove this as follows: Suppose (d2, ..., dn) is a solution of the smaller system, different from (0, ..., 0). We substitute into the first equation and solve for x1, thereby obtaining a vector (d1, d2, ..., dn), which you may verify is a solution of the larger system (**).

In this way we have reduced the size of our problem; we now need only to prove our theorem for a system of k − 1 equations in n − 1 unknowns. If we apply this reduction a second time, we reduce the problem to proving the theorem for a system of k − 2 equations in n − 2 unknowns. Continuing in this way, after k − 1 elimination steps in all, we will be down to a system consisting of only one equation, in n − k + 1 unknowns. Now n − k + 1 ≥ 2, because we assumed as our hypothesis that n > k; thus our problem reduces to proving the following statement: a "system" consisting of one linear homogeneous equation in two or more unknowns always has a solution other than 0.

We leave it to you to show that this statement holds. (Be sure you consider the case where one or more or all of the coefficients are zero.) □
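The elimination argument in this proof is constructive, and can be traced directly in code. The sketch below is a minimal illustration of my own (exact rational arithmetic via fractions; rows list the coefficients of one equation each), mirroring the three cases of the proof: no equations left, all x1-coefficients zero, or eliminate x1 and recurse.

```python
from fractions import Fraction

def nontrivial_solution(rows, n):
    """Given k < n homogeneous equations (each row lists n coefficients),
    produce a nonzero solution by the elimination argument of Theorem 4."""
    rows = [[Fraction(c) for c in row] for row in rows]
    if not rows:
        # no equations remain: x = (1, 0, ..., 0) works
        return [Fraction(1)] + [Fraction(0)] * (n - 1)
    if all(row[0] == 0 for row in rows):
        # every coefficient of x1 vanishes, so (1, 0, ..., 0) solves the system
        return [Fraction(1)] + [Fraction(0)] * (n - 1)
    rows.sort(key=lambda row: row[0] == 0)  # arrange so that a11 != 0
    first = rows[0]
    smaller = []
    for row in rows[1:]:                    # eliminate x1 from the other rows
        f = row[0] / first[0]
        smaller.append([a - f * b for a, b in zip(row[1:], first[1:])])
    tail = nontrivial_solution(smaller, n - 1)  # k-1 equations, n-1 unknowns
    # back-substitute into the first equation to recover x1
    x1 = -sum(c * t for c, t in zip(first[1:], tail)) / first[0]
    return [x1] + tail
```

For the system x1 + 2x2 + 3x3 = 0, 4x1 + 5x2 + 6x3 = 0 (two equations in three unknowns) the recursion produces a nonzero solution satisfying both equations exactly.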
Example 13. We have already noted that the vectors E1,...,En span all of Vn. It follows, for example, that any three vectors in V2 are dependent, that is, one of them equals a linear combination of the others. The same holds for any four vectors in V3. The accompanying picture makes these facts plausible.

Similarly, since the vectors E1,...,En are independent, any spanning set of Vn must contain at least n vectors. Thus no two vectors can span V3, and no set of three vectors can span V4.
Theorem 5. Let W be a subspace of Vn that does not consist of 0 alone. Then:
(a) The space W has a linearly independent spanning set.
(b) Any two linearly independent spanning sets for W have the same number k of elements; k ≤ n, and k < n unless W is all of Vn.
Proof. (a) Choose A1 ≠ 0 in W. Then the set {A1} is independent. In general, suppose {A1,...,Ai} is an independent set of vectors of W. If this set spans W, we are finished. Otherwise, we can choose a vector Ai+1 of W that is not in L(A1,...,Ai). Then the set {A1,...,Ai,Ai+1} is independent: For suppose that

c1A1 + ... + ciAi + ci+1Ai+1 = 0

for some scalars cj, not all zero. If ci+1 = 0, this equation contradicts the independence of A1,...,Ai, while if ci+1 ≠ 0, we can solve this equation for Ai+1, contradicting the fact that Ai+1 does not belong to L(A1,...,Ai).

Continuing the process just described, we can find larger and larger independent sets of vectors in W. The process stops only when the set we obtain spans W. Does it ever stop? Yes, for W is contained in Vn, and Vn contains no more than n independent vectors. So the process cannot be repeated indefinitely!
(b) Suppose S = {A1,...,Ak} and T = {B1,...,Bj} are two linearly independent spanning sets for W. Because S is independent and T spans W, we must have k ≤ j, by the preceding theorem. Because S spans W and T is independent, we must have k ≥ j. Thus k = j.

Now Vn contains no more than n independent vectors; therefore we must have k ≤ n. Suppose that W is not all of Vn. Then we can choose a vector Ak+1 of Vn that is not in W. By the argument just given, the set {A1,...,Ak,Ak+1} is independent. It follows that k + 1 ≤ n, so that k < n. □

Definition. Given a subspace W of Vn that does not consist of 0
alone, it has a linearly independent spanning set. Any such set is called a basis for W, and the number of elements in this set is called the dimension of W. We make the convention that if W consists of 0 alone, then the dimension of W is zero.
Example 14. The space Vn has a "natural" basis consisting of the vectors E1,...,En. It follows that Vn has dimension n. (Surprise!) There are many other bases for Vn. For instance, the vectors

[display not recovered]

form a basis for Vn, as you can check.
Exercises

1. Consider the subspaces of V3 listed in Exercise 2, p. A6. Find bases for each of these subspaces, and find spanning sets for them that are not bases.

2. Check the details of Example 14.

3. Suppose W has dimension k. (a) Show that any independent set in W consisting of k vectors spans W. (b) Show that any spanning set for W consisting of k vectors is independent.

4. Let S = {A1,...,Am} be a spanning set for W. Show that S contains a basis for W. [Hint: Use the argument of Theorem 5.]

5. Let {A1,...,Ak} be an independent set in Vn. Show that this set can be extended to a basis for Vn. [Hint: Use the argument of Theorem 5.]
6. If V and W are subspaces of Vn and Vk, respectively, a function T : V → W is called a linear transformation if it satisfies the usual linearity properties:

T(X + Y) = T(X) + T(Y),    T(cX) = cT(X).

If T is one-to-one and carries V onto W, it is called a linear isomorphism of vector spaces. Suppose A_1,...,A_m is a basis for V; let B_1,...,B_m be arbitrary vectors of W. (a) Show there exists a linear transformation T : V → W such that T(A_i) = B_i for all i. (b) Show this linear transformation is unique.
7. Let W be a subspace of Vn; let A_1,...,A_k be a basis for W. Let X, Y be vectors of W. Then X = Σ x_iA_i and Y = Σ y_iA_i for unique scalars x_i and y_i. These scalars are called the components of X and Y, respectively, relative to the basis A_1,...,A_k.

(a) Note that X + Y = Σ (x_i + y_i)A_i and cX = Σ (cx_i)A_i. Conclude that the function T : Vk → W defined by T(x_1,...,x_k) = Σ x_iA_i is a linear isomorphism.

(b) Suppose that the basis A_1,...,A_k is an orthonormal basis. Show that X·Y = Σ x_iy_i. Conclude that the isomorphism T of (a) preserves the dot product, that is, T(X)·T(Y) = X·Y.
8. Prove the following:

Theorem. If W is a subspace of Vn, then W has an orthonormal basis.

Proof. Step 1. Let B_1,...,B_m be mutually orthogonal non-zero vectors in Vn; let A_{m+1} be a vector not in L(B_1,...,B_m). Given scalars c_1,...,c_m, let

B_{m+1} = A_{m+1} + c_1B_1 + ... + c_mB_m.

Show that B_{m+1} is different from 0 and that L(B_1,...,B_m,B_{m+1}) = L(B_1,...,B_m,A_{m+1}). Then show that the c_i may be so chosen that B_{m+1} is orthogonal to each of B_1,...,B_m.

Step 2. Show that if W is a subspace of Vn of positive dimension, then W has a basis consisting of vectors that are mutually orthogonal. [Hint: Proceed by induction on the dimension of W.]

Step 3. Prove the theorem.
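Step 1 of this exercise is one pass of the Gram-Schmidt orthogonalization process; iterating it as in Step 2 orthogonalizes a whole basis. The following Python sketch is illustrative only (the function name and the use of float vectors are my own, not part of the notes):

```python
def gram_schmidt(vectors):
    """Orthogonalize a list of independent vectors, as in Steps 1-2:
    from each new vector, subtract its components along the B_i found so far."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    basis = []
    for a in vectors:
        # B_{m+1} = A_{m+1} + c_1 B_1 + ... + c_m B_m,
        # with c_i = -(A_{m+1} . B_i)/(B_i . B_i) so B_{m+1} is orthogonal to each B_i
        b = list(a)
        for q in basis:
            c = -dot(a, q) / dot(q, q)
            b = [bi + c * qi for bi, qi in zip(b, q)]
        basis.append(b)
    return basis
```

Dividing each resulting vector by its length then gives the orthonormal basis promised by the theorem.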
Gauss-Jordan elimination
If W is a subspace of Vn, specified by giving a spanning set for W, we have at present no constructive process for determining the dimension of W nor of finding a basis for W, although we know these exist. There is a simple procedure for carrying out this process; we describe it now.
Definition. The rectangular array of numbers

[k-by-n array with entries a_ij; display omitted]

is called a matrix of size k by n. The number a_ij is called the entry of A in the i-th row and j-th column. Suppose we let A_i be the vector (a_i1,...,a_in) for i = 1,...,k. Then A_i is just the i-th row of the matrix A. The subspace of Vn spanned by the vectors A_1,...,A_k is called the row space of the matrix A.
We now describe a procedure for determining the dimension of this space. It involves applying operations to the matrix A, of the following types:

(1) Interchange two rows of A.

(2) Replace row i of A by itself plus a scalar multiple of another row, say row m.

(3) Multiply row i of A by a non-zero scalar.

These operations are called the elementary row operations. Their usefulness comes from the following fact:

Theorem 6. Suppose B is the matrix obtained by applying a sequence of elementary row operations to A, successively. Then the row spaces of A and B are the same.
Proof. It suffices to consider the case where B is obtained by applying a single row operation to A. Let A_1,...,A_k be the rows of A, and let B_1,...,B_k be the rows of B.

If the operation is of type (1), these two sets of vectors are the same (only their order is changed), so the spaces they span are the same.

If the operation is of type (3), then B_i = cA_i and B_j = A_j for j ≠ i. Clearly, any linear combination of B_1,...,B_k can be written as a linear combination of A_1,...,A_k. Because c ≠ 0, the converse is also true.

Finally, suppose the operation is of type (2). Then B_i = A_i + dA_m and B_j = A_j for j ≠ i. Again, any linear combination of B_1,...,B_k can be written as a linear combination of A_1,...,A_k. Because A_i = B_i − dB_m and A_j = B_j for j ≠ i, the converse is also true. □
The Gauss-Jordan procedure consists of applying elementary row operations to the matrix A until it is brought into a form where the dimension of its row space is obvious. It is the following:
Gauss-Jordan elimination. Examine the first column of your matrix.

(I) If this column consists entirely of zeros, nothing needs to be done. Restrict your attention now to the matrix obtained by deleting the first column, and begin again.

(II) If this column has a non-zero entry, exchange rows if necessary to bring it to the top row. Then add multiples of the top row to the lower rows so as to make all remaining entries in the first column into zeros. Restrict your attention now to the matrix obtained by deleting the first column and first row, and begin again.

The procedure stops when the matrix remaining has only one row.
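The procedure above can be sketched directly in code. This is an illustrative Python version, not part of the original notes; it uses exact fractions, and note that adding −(entry/pivot) times the top row to a lower row is still an operation of type (2):

```python
from fractions import Fraction

def echelon_form(rows):
    """Bring a matrix to stair-step (echelon) form, using only row
    interchanges (1) and additions of row multiples (2)."""
    m = [[Fraction(x) for x in row] for row in rows]
    k = len(m)
    top = 0  # first row of the submatrix currently under consideration
    for col in range(len(m[0]) if m else 0):
        # case (I): the column of the submatrix is all zeros -> move right
        pivot = next((r for r in range(top, k) if m[r][col] != 0), None)
        if pivot is None:
            continue
        # case (II): bring a non-zero entry to the top row of the submatrix ...
        m[top], m[pivot] = m[pivot], m[top]
        # ... and add multiples of it to clear the entries below
        for r in range(top + 1, k):
            factor = m[r][col] / m[top][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[top])]
        top += 1
        if top == k:
            break
    return m

def row_space_dimension(rows):
    """The dimension is the number of non-zero rows of the echelon form."""
    return sum(1 for row in echelon_form(rows) if any(x != 0 for x in row))
```

For example, `row_space_dimension([[1, 2], [2, 4], [0, 1]])` returns 2, since the second row is twice the first.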
Let us illustrate the procedure with an example.

Problem. Find the dimension of the row space of the matrix [display omitted].

Solution. First step. (II) applies. Exchange rows (1) and (2). Replace row (3) by row (3) + row (1); then replace (4) by (4) + 2 times (1).

Second step. Restrict attention to the matrix in the box. (II) applies. Replace row (4) by row (4) − row (2).

Third step. Restrict attention to the matrix in the box. (I) applies, so nothing needs be done.

Fourth step. Restrict attention to the matrix in the box. (II) applies. Replace row (4) by row (4) minus the appropriate multiple of row (3).

[The intermediate matrices appear as displays in the original notes.]
The procedure is now finished. The matrix we end up with is in what is called echelon or "stair-step" form. The entries beneath the steps are zero, and the entries −1, 1, and 3 that appear at the "inside corners" of the stairsteps are non-zero. These entries that appear at the "inside corners" of the stairsteps are often called the pivots in the echelon form.

You can check readily that the non-zero rows of the matrix B are independent. (We shall prove this fact later.) It follows that the non-zero rows of the matrix B form a basis for the row space of B, and hence a basis for the row space of the original matrix A. Thus this row space has dimension 3.
The same result holds in general. If by elementary operations you reduce the matrix A to the echelon form B, then the non-zero rows of B are independent, so they form a basis for the row space of B, and hence a basis for the row space of A.
Now we discuss how one can continue to apply elementary operations to reduce the matrix B to an even nicer form. The procedure is this: Begin by considering the last non-zero row. By adding multiples of this row to each row above it, one can bring the matrix to the form where each entry lying above the pivot in this row is zero. Then continue the process, working now with the next-to-last non-zero row. Because all the entries above the last pivot are already zero, they remain zero as you add multiples of the next-to-last non-zero row to the rows above it. Similarly one continues. Eventually the matrix reaches the form where all the entries that are directly above the pivots are zero. (Note that the stairsteps do not change during this process, nor do the pivots themselves.)

Applying this procedure in the example considered earlier, one brings the matrix B into the form [display omitted].
Note that up to this point in the reduction process, we have used only elementary row operations of types (1) and (2). It has not been necessary to multiply a row by a non-zero scalar. This fact will be important later on.

We are not yet finished. The final step is to multiply each non-zero row by an appropriate non-zero scalar, chosen so as to make the pivot entry into 1. This we can do, because the pivots are non-zero. At the end of this process, the matrix is in what is called reduced echelon form. The reduced echelon form of the matrix C above is the matrix [display omitted].
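The back-substitution and final scaling steps just described can likewise be sketched in code. This illustrative Python function is mine, not the notes'; it assumes its input is already in echelon form:

```python
from fractions import Fraction

def reduced_echelon(echelon_rows):
    """Take a matrix already in stair-step form to reduced echelon form."""
    m = [[Fraction(x) for x in row] for row in echelon_rows]
    nonzero = [r for r in range(len(m)) if any(x != 0 for x in m[r])]

    # Step 1: working from the last non-zero row upward, add multiples of each
    # pivot row to the rows above it, zeroing the entries above its pivot
    # (only operations of type (2); the stairsteps and pivots do not change).
    for r in reversed(nonzero):
        p = next(j for j, x in enumerate(m[r]) if x != 0)  # pivot column
        for above in range(r):
            factor = m[above][p] / m[r][p]
            m[above] = [a - factor * b for a, b in zip(m[above], m[r])]

    # Step 2: scale each non-zero row (operation (3)) to make its pivot 1.
    for r in nonzero:
        p = next(j for j, x in enumerate(m[r]) if x != 0)
        m[r] = [x / m[r][p] for x in m[r]]
    return m
```

For instance, `reduced_echelon([[2, 4, 0], [0, 1, 3]])` gives `[[1, 0, -6], [0, 1, 3]]`.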
As we have indicated, the importance of this process comes from the following theorem:

Theorem 7. Let A be a matrix; let W be its row space. Suppose we transform A by elementary row operations into the echelon matrix B, or into the reduced echelon matrix D. Then the non-zero rows of B are a basis for W, and so are the non-zero rows of D.

Proof. The rows of B span W, as we noted before; and so do the rows of D. It is easy to see that no non-trivial linear combination of the non-zero rows of D equals the zero vector, because each of these rows has an entry of 1 in a position where the others all have entries of 0. Thus the dimension of W equals the number r of non-zero rows of D. This is the same as the number of non-zero rows of B. If the rows of B were not independent, then one would equal a linear combination of the others. This would imply that the row space of B could be spanned by fewer than r rows, which would imply that its dimension is less than r. □
Exercises

1. Find bases for the row spaces of the following matrices: [displays omitted]

2. Reduce the matrices in Exercise 1 to reduced echelon form.

*3. Prove the following:

Theorem. The reduced echelon form of a matrix is unique.

Proof. Let D and D' be two reduced echelon matrices whose rows span the same subspace W of Vn. We show that D = D'. Let R_1,...,R_k be the non-zero rows of D; and suppose that the pivots (first non-zero entries) in these rows occur in columns j_1,...,j_k, respectively.

(a) Show that the pivots of D' occur in the columns j_1,...,j_k. [Hint: Let R be a row of D'; suppose its pivot occurs in column p. We have R = c_1R_1 + ... + c_kR_k for some scalars c_i. (Why?) Show that c_i = 0 if j_i < p. Derive a contradiction if p is not equal to any of the j_i.]

(b) If R is a row of D' whose pivot occurs in column j_m, show that R = R_m. [Hint: We have R = c_1R_1 + ... + c_kR_k for some scalars c_i. Show that c_i = 0 for i ≠ m, and c_m = 1.]
Parametric equations of lines and planes in Vn

Given n-tuples P and A, with A ≠ 0, the line through P determined by A is defined to be the set of all points X such that

X = P + tA        (*)

for some scalar t. It is denoted by L(P;A). The vector A is called a direction vector for the line. Note that if P = 0, then L is simply the 1-dimensional subspace of Vn spanned by A.

The equation (*) is often called a parametric equation for the line, and t is called the parameter in this equation. As t ranges over all real numbers, the corresponding point X ranges over all points of the line L. When t = 0, then X = P; when t = 1, then X = P + A; when t = 1/2, then X = P + (1/2)A; and so on. All these are points of L.
Occasionally, one writes the vector equation out in scalar form as follows:

x_i = p_i + t a_i,   i = 1,...,n,

where P = (p_1,...,p_n) and A = (a_1,...,a_n). These are called the scalar parametric equations for the line.
Of course, there is no uniqueness here: a given line can be represented by many different parametric equations. The following theorem makes this result precise:

Theorem 8. The lines L(P;A) and L(Q;B) are equal if and only if they have a point in common and A is parallel to B.

Proof. If L(P;A) = L(Q;B), then the lines obviously have a point in common. Since P and P + A lie on the first line, they also lie on the second line, so that

P = Q + t_1B   and   P + A = Q + t_2B

for distinct scalars t_1 and t_2. Subtracting, we have A = (t_2 − t_1)B, so A is parallel to B.
Conversely, suppose the lines intersect in a point R, and suppose A and B are parallel. We are given that

R = P + t_1A   and   R = Q + t_2B

for some scalars t_1 and t_2, and that A = cB for some c ≠ 0. We can solve these equations for P in terms of Q and B:

P = Q + t_2B − t_1A = Q + (t_2 − t_1c)B.

Now, given any point X = P + tA of the line L(P;A), we can write

X = P + tA = Q + (t_2 − t_1c)B + tcB.

Thus X belongs to the line L(Q;B). Thus every point of L(P;A) belongs to L(Q;B). The symmetry of the argument shows that the reverse holds as well. □
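Theorem 8 gives a finite test for equality of two lines, which can be sketched in Python (the helper names are my own, not the notes'; vectors are tuples of exact numbers):

```python
def parallel(a, b):
    """True if the non-zero vectors a and b are parallel, i.e. linearly
    dependent: every 2x2 'cross' determinant of the pair vanishes."""
    n = len(a)
    return all(a[i] * b[j] == a[j] * b[i]
               for i in range(n) for j in range(i + 1, n))

def point_on_line(x, p, a):
    """True if x = p + t*a for some scalar t, i.e. x - p is 0 or parallel to a."""
    d = [xi - pi for xi, pi in zip(x, p)]
    return all(v == 0 for v in d) or parallel(d, a)

def lines_equal(p, a, q, b):
    """L(P;A) = L(Q;B) iff they share a point and A is parallel to B (Theorem 8).
    When A is parallel to B, checking that Q lies on L(P;A) supplies the
    common point."""
    return parallel(a, b) and point_on_line(q, p, a)
```

For example, `lines_equal((1, 0), (1, 1), (3, 2), (2, 2))` is true: the same line, represented by two different parametric equations.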
Definition. It follows from the preceding theorem that, given a line, its direction vector is uniquely determined up to a non-zero scalar multiple. We define two lines to be parallel if their direction vectors are parallel.
Corollary 9. Distinct parallel lines cannot intersect.

Corollary 10. Given a line L and a point Q, there is exactly one line containing Q that is parallel to L.

Proof. Suppose L is the line L(P;A). Then the line L(Q;A) contains Q and is parallel to L. By Theorem 8, any other line containing Q and parallel to L is equal to this one. □
Theorem 11. Given two distinct points P and Q, there is exactly one line containing them.

Proof. Let A = Q − P; then A ≠ 0. The line L(P;A) contains both P (since P = P + 0A) and Q (since Q = P + 1A). Now suppose L(R;B) is some other line containing P and Q. Then

P = R + t_1B   and   Q = R + t_2B

for distinct scalars t_1 and t_2. It follows that Q − P = (t_2 − t_1)B, so that the vector A = Q − P is parallel to B. It follows from Theorem 8 that L(R;B) = L(P;A). □
Now we study planes in Vn.

Definition. If P is a point of Vn and if A and B are independent vectors of Vn, we define the plane through P determined by A and B to be the set of all points X of the form

X = P + sA + tB        (*)

where s and t run through all real numbers. We denote this plane by M(P;A,B).

The equation (*) is called a parametric equation for the plane, and s and t are called the parameters in this equation. It may be written out as n scalar equations, if desired. When s = t = 0, then X = P; when s = 1 and t = 0, then X = P + A; when s = 0 and t = 1, then X = P + B; and so on.

Note that if P = 0, then this plane is just the 2-dimensional subspace of Vn spanned by A and B.
J u s t as f o r l i n e s , a p l a n e has many d i f f e r e n t p a r a m e t r i c
r e p r e s e n t a t i o n s . More p r e c i s e l y , one h a s t h e following theorem:
Theorem 12. - The p l a n e s M ( P ; A , B ) - and M(Q:C,D) - are
have a p o i n t - | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/0924c923e90a09cf95778ba4543644d6_MIT18_024s11_ChAnotes.pdf |
M ( P ; A , B ) - and M(Q:C,D) - are
have a p o i n t - i n common and t h e l i n e a r
i f and o n l y - i f t h e y - -
e q u a l - -
span of A - and B e q u a l s - t h e l i n e a r span - of C - and D.
I f t h e p l a n e s a r e e q u a l , t h e y obviously have a
-Proof.
point in c o m n . Fcrthermore, since P and P + A ard P + B all lie
A29
on the first plane, they lie on the second plane as well. Then
P + B = Q + s3c + t p ,
are some scalars s ar-d t
Subtracting, we see that
i
i '
A = (sZ-sl)C + (t2-tl)Df
B
(s3-s1)C + (t3-tl)D.
Thus A and B lie in the linear span of C and D. Symmetry shows that
C and D lie in the linear span of A and B as well. Thus these linear
spans are the same.
Conversely, suppose that the planes intersect in a point R and that L(A,B) = L(C,D). Then

P + s_1A + t_1B = R = Q + s_2C + t_2D

for some scalars s_i and t_i. We can solve this equation for P as follows:

P = Q + (linear combination of A, B, C, D).

Then if X is any point of the first plane M(P;A,B), we have, for some scalars s and t,

X = P + sA + tB
  = Q + (linear combination of A, B, C, D) + sA + tB
  = Q + (linear combination of C, D),

since A and B belong to L(C,D). Thus X belongs to M(Q;C,D). Symmetry of the argument shows that every point of M(Q;C,D) belongs to M(P;A,B) as well. □
Definition. Given a plane M = M(P;A,B), the vectors A and B are not uniquely determined by M, but their linear span is. We say the planes M(P;A,B) and M(Q;C,D) are parallel if L(A,B) = L(C,D).
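Theorem 12 and this definition likewise reduce to rank computations. A hedged Python sketch (function names are mine; exact arithmetic via fractions):

```python
from fractions import Fraction

def rank(rows):
    """Row rank via elimination with exact fractions."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def planes_parallel(a, b, c, d):
    """L(A,B) = L(C,D) iff stacking all four vectors adds no new dimension."""
    return rank([a, b]) == rank([c, d]) == rank([a, b, c, d]) == 2

def in_plane(x, p, a, b):
    """X lies in M(P;A,B) iff X - P is in the linear span of A and B."""
    diff = [xi - pi for xi, pi in zip(x, p)]
    return rank([a, b, diff]) == rank([a, b])

def planes_equal(p, a, b, q, c, d):
    # Theorem 12: equal iff the spans agree and a common point exists;
    # when the spans agree, Q lying in M(P;A,B) supplies the common point.
    return planes_parallel(a, b, c, d) and in_plane(q, p, a, b)
```

For two parallel planes with different base points, `planes_equal` is false, in line with Corollary 13.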
Corollary 13. Two distinct parallel planes cannot intersect.

Corollary 14. Given a plane M and a point Q, there is exactly one plane containing Q that is parallel to M.

Proof. Suppose M = M(P;A,B). Then M(Q;A,B) is a plane that contains Q and is parallel to M. By Theorem 12, any other plane containing Q parallel to M is equal to this one. □
Definition. We say three points P, Q, R are collinear if they lie on a line.

Lemma 15. The points P, Q, R are collinear if and only if the vectors Q−P and R−P are dependent (i.e., parallel).

Proof. The line L(P;Q−P) is the one containing P and Q, and the line L(P;R−P) is the one containing P and R. If Q−P and R−P are parallel, these lines are the same, by Theorem 8, so P, Q, and R are collinear. Conversely, if P, Q, and R are collinear, these lines must be the same, so that Q−P and R−P must be parallel. □
Theorem 16. Given three non-collinear points P, Q, R, there is exactly one plane containing them.

Proof. Let A = Q − P and B = R − P; then A and B are independent. The plane M(P;A,B) contains P and P + A = Q and P + B = R. Now suppose M(S;C,D) is another plane containing P, Q, and R. Then

P = S + s_1C + t_1D,   Q = S + s_2C + t_2D,   R = S + s_3C + t_3D

for some scalars s_i and t_i. Subtracting, we see that the vectors Q − P = A and R − P = B belong to the linear span of C and D. By symmetry, C and D belong to the linear span of A and B. Then Theorem 12 implies that these two planes are equal. □
Exercises

1. We say the line L is parallel to the plane M = M(P;A,B) if the direction vector of L belongs to L(A,B). Show that if L is parallel to M and intersects M, then L is contained in M.

2. Show that two vectors A_1 and A_2 in Vn are linearly dependent if and only if they lie on a line through the origin.

3. Show that three vectors A_1, A_2, A_3 in Vn are linearly dependent if and only if they lie on some plane through the origin.
4. Let P, Q, and R be the points given in the original notes, and let

A = (1,−1,0),   B = (2,0,1).

(a) Find parametric equations for the line through P and Q, and for the line through R with direction vector A. Do these lines intersect?

(b) Find parametric equations for the plane through P, Q, and R, and for the plane through P determined by A and B.

5. Let L be the line in V3 through the points P = (1,0,2) and Q = (1,1,3). Let L' be the line through 0 parallel to the vector A = (3,−1,...). Find parametric equations for the line that intersects both L and L' and is orthogonal to both of them.
Parametric equations for k-planes in Vn

Following the pattern for lines and planes, one can define, more generally, a k-plane in Vn as follows:

Definition. Given a point P of Vn and a set A_1,...,A_k of k independent vectors in Vn, we define the k-plane through P determined by A_1,...,A_k to be the set of all vectors X of the form

X = P + t_1A_1 + ... + t_kA_k

for some scalars t_i. We denote this set of points by M(P;A_1,...,A_k). Said differently, X is in the k-plane M(P;A_1,...,A_k) if and only if X − P is in the linear span of A_1,...,A_k.

Note that if P = 0, then this k-plane is just the k-dimensional linear subspace of Vn spanned by A_1,...,A_k.
Just as with the case of lines (1-planes) and planes (2-planes), one has the following results:

Theorem 17. Let M_1 = M(P;A_1,...,A_k) and M_2 = M(Q;B_1,...,B_k) be two k-planes in Vn. Then M_1 = M_2 if and only if they have a point in common and the linear span of A_1,...,A_k equals the linear span of B_1,...,B_k.

Definition. We say that the k-planes M_1 and M_2 of this theorem are parallel if the linear span of A_1,...,A_k equals the linear span of B_1,...,B_k.

Theorem 18. Given a k-plane M in Vn and a point Q, there is exactly one k-plane in Vn containing Q parallel to M.
Lemma 19. Given points P_0,...,P_k in Vn, they are contained in a plane of dimension less than k if and only if the vectors P_1 − P_0,...,P_k − P_0 are dependent.

Theorem 20. Given k+1 distinct points P_0,...,P_k in Vn. If these points do not lie in any plane of dimension less than k, then there is exactly one k-plane containing them; it is the k-plane M(P_0; P_1 − P_0,...,P_k − P_0).

More generally, we make the following definition:

Definition. If M_1 = M(P;A_1,...,A_k) is a k-plane, and M_2 = M(Q;B_1,...,B_m) is an m-plane, in Vn, and if k ≤ m, we say M_1 is parallel to M_2 if the linear span of A_1,...,A_k is contained in the linear span of B_1,...,B_m.
Exercises

1. Prove Theorems 17 and 18.

2. Prove Theorems 19 and 20.

3. Given the line L = L(Q;A) in V3, where A = (1,−1,2). Find parametric equations for a 2-plane containing the point P = (1,1,1) that is parallel to L. Is it unique? Can you find such a plane containing both the point P and the point Q = (−1,0,2)?

4. Given the 2-plane M_1 in V4 containing the points P = (1,−1,2,−1) and Q = (0,1,1,0) and R = (1,1,0,3). Find parametric equations for a 3-plane in V4 that contains the point S = (1,1,1,1) and is parallel to M_1. Is it unique? Can you find such a 3-plane that contains both S and the point T = (0,1,0,2)?
MIT OpenCourseWare
http://ocw.mit.edu
18.024 Multivariable Calculus with Theory
Spring 2011
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/0924c923e90a09cf95778ba4543644d6_MIT18_024s11_ChAnotes.pdf |
L5: Simple Sequential Circuits and Verilog
Acknowledgements:
Materials in this lecture are courtesy of the following sources and are used with
permission.
Nathan Ickes
Rex Min
L5: 6.111 Spring 2006
Introductory Digital Systems Laboratory
1
Key Points from L4 (Sequential Blocks)

Classification:
- Latch: level sensitive (a positive latch passes its input to the output on the high phase and holds the value on the low phase)
- Register: edge-triggered (a positive register samples its input on the rising edge)
- Flip-flop: any element that has two stable states. Quite often "flip-flop" is also used to denote an (edge-triggered) register

[Figure: symbols for a positive latch and a positive register, each with input D, output Q, and clock Clk]

- Latches are used to build registers (using the master-slave configuration), but are almost NEVER used by themselves in a standard digital design flow.
- Quite often, latches are inserted in the design by mistake (e.g., an error in your Verilog code). Make sure you understand the difference between the two.
- There are several types of memory elements (SR, JK, T, D). We will most commonly use the D-register, though you should understand how the different types are built and their functionality.
System Timing Parameters

[Figure: a register driving combinational logic driving a second register, all clocked by Clk]

Register timing parameters:
- Tcq: worst-case rising-edge clock-to-Q delay
- Tcq,cd: contamination or minimum delay from clock to Q
- Tsu: setup time
- Th: hold time

Logic timing parameters:
- Tlogic: worst-case delay through the combinational logic network
- Tlogic,cd: contamination or minimum delay through the logic network
System Timing (I): Minimum Period

[Figure: flip-flop FF1 driving combinational logic (output CLout) into a second flip-flop, all clocked by CLK; the waveform marks Tcq, Tlogic, and Tsu, plus the contamination delays Tcq,cd and Tl,cd, within one clock period]

T > Tcq + Tlogic + Tsu
System Timing (II): Minimum Delay

[Figure: the same register-logic-register path; the waveform shows the contamination delays Tcq,cd and Tl,cd against the hold window Th after the clock edge]

Tcq,cd + Tlogic,cd > Thold
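Both timing constraints are simple inequalities, so they are easy to check numerically. A small Python sketch (the parameter values below are made-up examples, not numbers from the lecture):

```python
def min_clock_period(t_cq, t_logic, t_su):
    """Setup constraint: the clock period must satisfy T > Tcq + Tlogic + Tsu."""
    return t_cq + t_logic + t_su

def hold_ok(t_cq_cd, t_logic_cd, t_hold):
    """Hold constraint: Tcq,cd + Tlogic,cd > Th must hold at every register."""
    return t_cq_cd + t_logic_cd > t_hold

# hypothetical register/logic parameters, in ns
T = min_clock_period(t_cq=0.5, t_logic=3.0, t_su=0.4)   # 3.9 ns lower bound
ok = hold_ok(t_cq_cd=0.2, t_logic_cd=0.1, t_hold=0.25)  # 0.3 ns > 0.25 ns
```

Note that the setup constraint bounds the period from below (fast logic lets you clock faster), while the hold constraint does not involve the period at all: it can only be fixed by adding contamination delay, not by slowing the clock.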
The Sequential always Block

- Edge-triggered circuits are described using a sequential always block

Combinational:

    module combinational(a, b, sel, out);
      input a, b;
      input sel;
      output out;
      reg out;

      always @ (a or b or sel)
      begin
        if (sel) out = a;
        else out = b;
      end
    endmodule

[Figure: a 2-to-1 multiplexer selecting between a and b, driving out]

Sequential:

    module sequential(a, b, sel, clk, out);
      input a, b;
      input sel, clk;
      output out;
      reg out;

      always @ (posedge clk)
      begin
        if (sel) out <= a;
        else out <= b;
      end
    endmodule

[Figure: the same multiplexer followed by a D register clocked by clk, driving out]
Importance of the Sensitivity List

- The use of posedge and negedge makes an always block sequential (edge-triggered)
- Unlike a combinational always block, the sensitivity list does determine behavior for synthesis!

D flip-flop with synchronous clear:

    module dff_sync_clear(d, clearb, clock, q);
      input d, clearb, clock;
      output q;
      reg q;

      always @ (posedge clock)
      begin
        if (!clearb) q <= 1'b0;
        else q <= d;
      end
    endmodule

The always block is entered only at each positive clock edge.

D flip-flop with asynchronous clear:

    module dff_async_clear(d, clearb, clock, q);
      input d, clearb, clock;
      output q;
      reg q;

      always @ (negedge clearb or posedge clock)
      begin
        if (!clearb) q <= 1'b0;
        else q <= d;
      end
    endmodule

The always block is entered immediately when (active-low) clearb is asserted.

Note: the following is incorrect syntax: always @ (clear or negedge clock). If one signal in the sensitivity list uses posedge/negedge, then all signals must.

- Assign any signal or variable from only one always block. Be wary of race conditions: always blocks execute in parallel.
Simulation (after Place and Route in Xilinx)

- DFF with synchronous clear: [waveform showing the clock-to-Q delay tc-q; the clear takes effect on the clock edge]
- DFF with asynchronous clear: [waveform showing the clear taking effect on the falling edge of clearb]
Blocking vs. Nonblocking Assignments

- Verilog supports two types of assignments within always blocks, with subtly different behaviors.
- Blocking assignment: evaluation and assignment are immediate

    always @ (a or b or c)
    begin
      x = a | b;       // 1. Evaluate a | b, assign result to x
      y = a ^ b ^ c;   // 2. Evaluate a ^ b ^ c, assign result to y
      z = b & ~c;      // 3. Evaluate b & (~c), assign result to z
    end

- Nonblocking assignment: all assignments are deferred until all right-hand sides have been evaluated (end of the simulation timestep)

    always @ (a or b or c)
    begin
      x <= a | b;      // 1. Evaluate a | b but defer assignment of x
      y <= a ^ b ^ c;  // 2. Evaluate a ^ b ^ c but defer assignment of y
      z <= b & ~c;     // 3. Evaluate b & (~c) but defer assignment of z
    end                // 4. Assign x, y, and z their new values

- Sometimes, as above, both produce the same result. Sometimes, not!
Assignment Styles for Sequential Logic

[Figure: a flip-flop-based digital delay line: in → q1 → q2 → out, three D registers clocked by clk]

- Will nonblocking and blocking assignments both produce the desired result?

    module nonblocking(in, clk, out);
      input in, clk;
      output out;
      reg q1, q2, out;

      always @ (posedge clk)
      begin
        q1 <= in;
        q2 <= q1;
        out <= q2;
      end
    endmodule

    module blocking(in, clk, out);
      input in, clk;
      output out;
      reg q1, q2, out;

      always @ (posedge clk)
      begin
        q1 = in;
        q2 = q1;
        out = q2;
      end
    endmodule
Use Nonblocking for Sequential Logic
always @ (posedge clk)
begin
  q1 <= in;
  q2 <= q1;
  out <= q2;
end

always @ (posedge clk)
begin
  q1 = in;
  q2 = q1;
  out = q2;
end
Nonblocking: “At each rising clock edge, q1, q2, and out simultaneously receive the old values of in, q1, and q2.”

Blocking: “At each rising clock edge, q1 = in. After that, q2 = q1 = in. After that, out = q2 = q1 = in. Therefore out = in.”
[Figure: synthesized circuits — the nonblocking version yields three cascaded D registers (in → q1 → q2 → out); the blocking version collapses to a single D register, so out tracks in after one clock edge]
- Blocking assignments do not reflect the intrinsic behavior of multi-stage sequential logic
- Guideline: use nonblocking assignments for sequential always blocks
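One clock edge of the 3-stage delay line can be simulated in Python under each assignment style (a sketch, not from the slides; function names are mine):

```python
def clock_nonblocking(state, in_bit):
    # All right-hand sides sample the pre-edge state, then commit
    # together: behaves like three real flip-flops.
    q1, q2, out = state
    return (in_bit, q1, q2)

def clock_blocking(state, in_bit):
    # Each assignment completes immediately, so later statements see
    # freshly written values: the "registers" collapse into one.
    q1, q2, out = state
    q1 = in_bit
    q2 = q1
    out = q2
    return (q1, q2, out)

state_nb = state_b = (0, 0, 0)
for bit in [1, 0, 1]:
    state_nb = clock_nonblocking(state_nb, bit)
    state_b = clock_blocking(state_b, bit)
print(state_nb)  # (1, 0, 1): input delayed through the stages
print(state_b)   # (1, 1, 1): out tracks in after a single edge
```

The nonblocking version preserves the delay-line behavior; the blocking version makes out equal to in after every edge, matching the "Therefore out = in" reading above.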
Simulation

- Non-blocking Simulation
- Blocking Simulation
Use Blocking for Combinational Logic
Blocking Behavior

  Step                                  a b c x y
  (Given) initial condition             1 1 0 1 1
  a changes; always block triggered     0 1 0 1 1
  x = a & b;                            0 1 0 0 1
  y = x | c;                            0 1 0 0 0

[Figure: gate-level schematic — x = a & b, y = x | c]
module blocking(a,b,c,x,y);
  input a,b,c;
  output x,y;
  reg x,y;
  always @ (a or b or c)
  begin
    x = a & b;
    y = x | c;
  end
endmodule
Nonblocking Behavior

  Step                                  a b c x y   Deferred
  (Given) initial condition             1 1 0 1 1
  a changes; always block triggered     0 1 0 1 1
  x <= a & b;                           0 1 0 1 1   x<=0
  y <= x | c;                           0 1 0 1 1   x<=0, y<=1
  Assignment completion                 0 1 0 0 1

module nonblocking(a,b,c,x,y);
  input a,b,c;
  output x,y;
  reg x,y;
  always @ (a or b or c)
  begin
    x <= a & b;
    y <= x | c;
  end
endmodule
- Nonblocking and blocking assignments will synthesize correctly. Will both styles simulate correctly?
- Nonblocking assignments do not reflect the intrinsic behavior of multi-stage combinational logic
- While nonblocking assignments can be hacked to simulate correctly (expand the sensitivity list), it’s not elegant
- Guideline: use blocking assignments for combinational always blocks
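The stale-value problem in the nonblocking table can be reproduced in a Python sketch (not from the slides; names are illustrative). Starting from the settled state x=1, y=1 with b=1, c=0, a falls to 0:

```python
def blocking_pass(a, b, c, x, y):
    # x = a & b; y = x | c;  -- y's right-hand side sees the updated x
    x = a & b
    y = x | c
    return x, y

def nonblocking_pass(a, b, c, x, y):
    # x <= a & b; y <= x | c;  -- y's right-hand side sees the OLD x,
    # so one trigger leaves y stale: wrong for combinational logic
    new_x = a & b
    new_y = x | c
    return new_x, new_y

print(blocking_pass(0, 1, 0, x=1, y=1))     # (0, 0): correct gate output
print(nonblocking_pass(0, 1, 0, x=1, y=1))  # (0, 1): y still uses old x
```

The nonblocking version only settles if the always block fires again once x changes, which is why the workaround is to add x to the sensitivity list.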
L5: 6.111 Spring 2006
Introductory Digital Systems Laboratory
13
The Asynchronous Ripple Counter
A simple counter architecture:

- uses only registers (e.g., the 74HC393 uses a T-register and negative edge-clocking)
- each D register is set up to always toggle, i.e., a T register with T = 1
- toggle rate is fastest for the LSB, but the ripple architecture leads to large skew between outputs

[Figure: four D registers chained Qbar-to-clock, producing Count[3:0] from Clock]
[Waveform: Clock and Count[0] through Count[3], showing the skew between output transitions]
The Ripple Counter in Verilog
Single D Register with Asynchronous Clear:

module dreg_async_reset (clk, clear, d, q, qbar);
  input d, clk, clear;
  output q, qbar;
  reg q;
  always @ (posedge clk or negedge clear)
  begin
    if (!clear)
      q <= 1'b0;
    else q <= d;
  end
  assign qbar = ~q;
endmodule
Structural Description of Four-bit Ripple Counter:

[Figure: four dreg_async_reset instances; each stage’s qbar output clocks the next stage]
module ripple_counter (clk, count, clear);
  input clk, clear;
  output [3:0] count;
  wire [3:0] count, countbar;
  dreg_async_reset bit0(.clk(clk), .clear(clear), .d(countbar[0]),
                        .q(count[0]), .qbar(countbar[0]));
  dreg_async_reset bit1(.clk(countbar[0]), .clear(clear), .d(countbar[1]),
                        .q(count[1]), .qbar(countbar[1]));
  dreg_async_reset bit2(.clk(countbar[1]), .clear(clear), .d(countbar[2]),
                        .q(count[2]), .qbar(countbar[2]));
  dreg_async_reset bit3(.clk(countbar[2]), .clear(clear), .d(countbar[3]),
                        .q(count[3]), .qbar(countbar[3]));
endmodule
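The ripple behavior can be sketched in Python (an illustrative model, not from the slides): bit 0 toggles on every clock, and each higher bit toggles only when the bit below it falls from 1 to 0.

```python
def ripple_tick(count_bits):
    # count_bits: [bit0, bit1, bit2, bit3], LSB first.
    bits = list(count_bits)
    for i in range(len(bits)):
        old = bits[i]
        bits[i] ^= 1          # this stage toggles
        if not (old == 1 and bits[i] == 0):
            break             # no falling edge: ripple stops here
    return bits

count = [0, 0, 0, 0]
values = []
for _ in range(5):
    count = ripple_tick(count)
    values.append(sum(b << i for i, b in enumerate(count)))
print(values)  # [1, 2, 3, 4, 5]
```

In hardware each toggle in the chain adds a clock-to-Q delay, which is the skew visible in the simulation on the next slide.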
Simulation of Ripple Effect
Logic for a Synchronous Counter
- Count (C) will be retained by a D register
- Next value of counter (N) computed by combinational logic
  C3 C2 C1 | N3 N2 N1
   0  0  0 |  0  0  1
   0  0  1 |  0  1  0
   0  1  0 |  0  1  1
   0  1  1 |  1  0  0
   1  0  0 |  1  0  1
   1  0  1 |  1  1  0
   1  1  0 |  1  1  1
   1  1  1 |  0  0  0
N1 := C1'
N2 := C1 C2' + C1' C2
   := C1 xor C2
N3 := C1 C2 C3' + C1' C3 + C2' C3
   := C1 C2 C3' + (C1' + C2') C3
   := (C1 C2) xor C3
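The derived next-state equations should implement a 3-bit increment; a Python check (not from the slides) confirms this over all eight states:

```python
def next_state(c3, c2, c1):
    n1 = c1 ^ 1              # N1 = C1'
    n2 = c1 ^ c2             # N2 = C1 xor C2
    n3 = (c1 & c2) ^ c3      # N3 = (C1 C2) xor C3
    return n3, n2, n1

for value in range(8):
    c3, c2, c1 = (value >> 2) & 1, (value >> 1) & 1, value & 1
    n3, n2, n1 = next_state(c3, c2, c1)
    assert (n3 << 2) | (n2 << 1) | n1 == (value + 1) % 8
print("next-state logic counts 0..7 correctly")
```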
[Karnaugh maps for N1, N2, and N3 over C1, C2, C3, and the resulting circuit: the combinational next-state logic feeding three D registers clocked by CLK]
From [Katz05]
The 74163 Catalog Counter
- Synchronous Load and Clear Inputs
- Positive Edge Triggered FFs
- Parallel Load Data from D, C, B, A
- P, T Enable Inputs: both must be asserted to enable counting
- Ripple Carry Output (RCO): asserted when counter value is 1111 (conditioned by T); used for cascading counters

Synchronous CLR and LOAD:
  If CLRb = 0 then Q <= 0
  Else if LOADb = 0 then Q <= D
  Else if P * T = 1 then Q <= Q + 1
  Else Q <= Q
[Pinout: 74163 Synchronous 4-Bit Upcounter — CLK pin 2; P pin 7; T pin 10; LOAD pin 9; CLR pin 1; data inputs D, C, B, A pins 6, 5, 4, 3; outputs QD, QC, QB, QA pins 11, 12, 13, 14; RCO pin 15]
Verilog Code for ‘163

- Behavioral description of the ‘163 counter:
module counter(LDbar, CLRbar, P, T, CLK, D, count, RCO);
  input LDbar, CLRbar, P, T, CLK;
  input [3:0] D;
  output [3:0] count;
  output RCO;
  reg [3:0] Q;

  always @ (posedge CLK) begin    // priority logic for control signals
    if (!CLRbar) Q <= 4'b0000;
    else if (!LDbar) Q <= D;
    else if (P && T) Q <= Q + 1;
  end
  assign count = Q;
  assign RCO = Q[3] & Q[2] & Q[1] & Q[0] & T;   // RCO gated by T input
endmodule
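The ‘163’s synchronous priority logic and its gated RCO can be modeled with a small Python sketch (not from the slides; function names are mine):

```python
def tick_163(q, d, clrb, ldb, p, t):
    # Synchronous priority: CLR first, then LOAD, then count-enable (P and T).
    if clrb == 0:
        q = 0
    elif ldb == 0:
        q = d & 0xF
    elif p and t:
        q = (q + 1) & 0xF
    return q

def rco(q, t):
    return int(q == 0xF and t == 1)   # asserted at 1111, gated by T

q = 0xE
q = tick_163(q, d=0, clrb=1, ldb=1, p=1, t=1)   # counts to 0xF
print(q, rco(q, t=1))    # 15 1
q = tick_163(q, d=0, clrb=1, ldb=1, p=1, t=1)   # wraps to 0
print(q, rco(q, t=1))    # 0 0
```

Note that RCO is combinational on Q and T, which is why it can glitch, as the next simulation slide shows.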
Simulation

Notice the glitch on RCO!
Output Transitions
- Any time multiple bits change, the counter output needs time to settle.
- Even though all flip-flops share the same clock, individual bits will change at different times.
  - Clock skew, propagation time variations
- Can cause glitches in combinational logic driven by the counter
- The RCO can also have a glitch.
Figure by MIT OpenCourseWare.
Cascading the 74163: Will this Work?
[Figure: three ‘163s cascaded for bits 0-3, 4-7, and 8-11; the first counter’s enables are tied to VDD, and each counter’s RCO feeds the enable of the next; all share CLK]
- ‘163 is enabled only if P and T are high
- When first counter reaches Q = 4’b1111, its RCO goes high for one cycle
- When RCO goes high, next counter is enabled (P T = 1)

So far, so good...then what’s wrong?
Incorrect Cascade for 74163

Everything is fine up to 8’b11101111:

[Figure: low counter at 1111 with its RCO high, middle counter at 1110, high counter at 0000]
Problem at 8’b11110000: one of the RCOs is now stuck high for 16 cycles!

[Figure: low counter wrapped to 0000; middle counter now holds 1111 with its RCO high, and that RCO stays asserted until the low counter reaches 1111 again, wrongly enabling the high counter every cycle]
Correct Cascade for 74163
[Figure: correct cascade — the master enable drives every counter’s P input; each stage’s RCO drives the next stage’s T input]

- P input takes the master enable
- T input takes the ripple carry

assign RCO = Q[3] & Q[2] & Q[1] & Q[0] & T;
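Why this wiring works can be checked with a Python sketch (not from the slides; names are illustrative): because each RCO is gated by its own T input, the carry chain only propagates when every lower stage holds 1111.

```python
def cascade_tick(stages, master_enable):
    # stages: list of 4-bit counter values, least-significant stage first.
    t = 1                                   # T input of the lowest stage
    new = []
    for q in stages:
        enable = master_enable and t == 1   # P and T must both be high
        carry = int(q == 0xF and t == 1)    # RCO gated by this stage's T
        new.append((q + 1) & 0xF if enable else q)
        t = carry                           # becomes the next stage's T
    return new

stages = [0, 0, 0]          # three 4-bit counters, 12 bits total
for _ in range(300):
    stages = cascade_tick(stages, master_enable=True)
value = stages[0] | (stages[1] << 4) | (stages[2] << 8)
print(value)  # 300
```

After 300 clocks the 12-bit value is exactly 300: no stage is ever enabled by a stale carry, unlike the incorrect cascade.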
Summary
- Use blocking assignments for combinational always blocks
- Use non-blocking assignments for sequential always blocks
- Synchronous design methodology usually used in digital circuits
  - Single global clocks to all sequential elements
  - Sequential elements almost always of edge-triggered flavor (design with latches can be tricky)
- Today we saw simple examples of sequential circuits (counters)
1. Overview and some basics
9 August 2006
Spoken language conveys not only words, but a wide range of other information about timing,
intonation, prominence, phrasing, voice quality, rhythm etc. that is often collectively called
spoken prosody. These aspects of an utterance are sometimes called supra-segmental, because
they can span regions larger than a single phonemic segment (consonant or vowel). The ToBI
(Tones and Break Indices) transcription system is a method for transcribing two distinctive
aspects of the prosody of spoken utterances:
a) accents, which contribute to a word’s relative prominence in an utterance and
b) phrasing, which creates a grouping of words.
These prosodic aspects of spoken language can convey distinctive semantic, syntactic or even
morphological facts, and can be useful for tasks such as mapping prominence and grouping
patterns to meaning differences, understanding the effects of prominence and grouping on the
pronunciation of words, and synthesizing prosodically natural-sounding speech. Speech
scientists are interested in annotating the prosodic structure of large numbers of utterances in
order to study these phenomena, and ToBI was developed to provide a common transcription
system that could be used by many different laboratories, making it easier to share data.
An example of how prosodic prominence and grouping can convey differences in structure and
meaning can be seen in two different pronunciations of the word string It broke out in
Washington (Price et al. 1992, Lehiste 1987). On the one hand, this string can be produced with
a prominence on broke and a phrase boundary just after that word, corresponding to the
orthographic representation It BROKE, out in WASHington. <broke1> On the other hand, it can
be produced with a prominence on out and a phrase boundary just after that word, corresponding
to It broke OUT, in WASHington. <broke2>.
In this case, the two different patterns that correspond to two different syntactic structures and
two different meanings can be easily captured by orthographic conventions and punctuation, i.e.
upper case typeface and commas. But other differences, like the distinction between prominence
signaled by a high pitch vs. one signaled by a low pitch, or between a prominence signaled with
an early pitch peak vs. a later one, cannot be easily captured by conventional orthography. The
ToBI system, like other transcription systems, was developed to capture prosodic differences of
all types, not just the ones that are easy to write down in English orthography. As you will see,
the ToBI framework assumes that the differences that matter are phonological differences, i.e.
that the job of the transcription system is to capture the differences between prosodic categories
that correspond to differences in function and in meaning, but it is not yet entirely clear how this
mapping between prosodic categories and their meanings and functions works. One advantage
of developing an explicit system for transcribing the categories is that, when it is applied to
samples of natural speech, it provides both a stringent test of the theory behind the transcription
system, and a laboratory tool for determining how the theory needs to be extended and revised.
A challenging aspect of transcribing prosody is that there is a substantial level of variability. For
example, one important cue to prominence and phrasing is the intonation of an utterance, i.e.
changes in the pitch that are caused by changes in the frequency of vibration of the vocal folds,
often called f0. An f0 that is high in a speaker’s range on a salient syllable can mark a pitch | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/0986334a8a593bba602eaaf8ab13de2f_chap1.pdf |
accent, but so can a lower (but still high-for-this-speaker) f0 on another salient syllable.
Moreover, a high f0 might mark a pitch accented word or it could mark a phrase boundary tone,
as in the pitch rise often heard on the final syllable of certain kinds of questions in English, like
Is it raining? In order to determine the appropriate ToBI transcription, the entire utterance must
be parsed, that is, understood as a whole, to determine how the high and low tones are
implemented. This tutorial will begin by introducing utterances with relatively unambiguous
ToBI annotations and will gradually move to utterances for which even experienced labellers
might disagree about how to transcribe some regions. This will be less disturbing if one
recognizes that ambiguous cues are also present in other types of speech annotation. For example,
the acoustic cues for a canonical /t/ might not be present, e.g. in some renditions of the word
butter. However, the existence of a well-established lexicon allows annotators to recognize
tokens produced with a flap (sounding more like a /d/) and tokens produced more carefully as
variations of a single lexical item, and to label them consistently. At this writing, the nature of
variation for prosodic categories (and thus for ToBI labels) is not fully understood, and this
results in some reasonable disagreement in prosodic parses in some renditions of some utterances,
particularly in spontaneous speech. ToBI labelling is a new endeavor compared to, say, phonetic
transcription, and as the nature and range of prosodic variation becomes better understood and
documented, we expect the transcription ambiguity to lessen. In the meantime, the system
provides a number of tools for recording your uncertainty, to help us understand better where the
system can be improved. In fact, although the development of a ToBI system for a particular
language requires a certain degree of prior understanding of how prominence and grouping work
in that language, a ToBI system can also be viewed as a tool for exploring the prosodic system of
a language in greater detail. In this tutorial we introduce you to the ToBI system that has been
developed for Mainstream American English; for information about ToBI systems for other
languages, and about the general ToBI framework within which these systems have been
developed, see Jun (2005), Prosodic Typology.
1.1. The basic parts of ToBI1
A ToBI transcription of an utterance consists minimally of a recording of the speech, its
fundamental frequency contour, and (in the transcription proper) symbolic labels for prosodic
events. The transcription proper is usually arranged in four time-aligned parallel horizontal
panels or tiers, so that the symbolic labels can be easily matched with the corresponding f0 track
and speech waveform. (Other tiers can be added for the needs of particular sites.) The four
labelling tiers each appear in their own window:
(1) the Tone tier, for transcribing tonal events
(2) the Orthographic tier, for transcribing words
(3) the Break-Index tier, for transcribing boundaries between words
(4) the Miscellaneous tier, for recording additional observations
In addition, two new tiers have been suggested for labellers who find them useful. These are not
used in the examples in the beginning of this tutorial, but will be explained in Section 2.12:
(5) an Alternative Tier, for transcribing alternative labels in the case of ambiguity
(6) a Discussion tier, for recording data for selected research issues
1 Parts of this section and section 1.2 have been adapted from the Guidelines for ToBI Labelling, version 3.0 (1997).
One popular program for labelling and displaying ToBI transcriptions is Praat (available at
http://www.praat.org ). Here is an example using Praat to display the utterance waveform, the f0
contour, the spectrogram and the first four tiers for the utterance Armani knew the millionaire.
Figure 1: armani1.wav with accompanying Praat TextGrid.
<armani1>
The topmost window in this display shows the waveform of the recorded utterance; later, when
you learn how to expand the horizontal time scale, you will see more clearly the vertical
striations that indicate the individual pitch pulses made by the vocal folds as they vibrate. The
horizontal axis corresponds to time and the vertical axis to the amplitude of the vibration. You
can see in the waveform that the amplitude varies roughly with the syllable structure of the
utterance: it is large for vowels (where the mouth is open relatively wide), smaller for consonants
(which have a constriction in the vocal tract), and zero (or nearly zero) when there is no speech
signal.
The rate of vibration of the vocal folds is what we hear as the pitch, and this is represented in the
second window as a semi-continuous blue line superimposed on a different representation of the
speech signal, the spectrogram (see “What is the spectrogram?” in the grey box below). When
you play this utterance, notice where you hear a higher pitch. Do these words and syllables
correspond to the places where the F0 contour (also called the pitch track | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/0986334a8a593bba602eaaf8ab13de2f_chap1.pdf |
syllables
correspond to the places where the F0 contour (also called the pitch track or the f0 track,
although f0 and perceived pitch are not exactly the same thing) is higher, as indicated by the blue
lines?
What is the spectrogram?
The gray and white pattern in the background behind the f0 track is the spectrogram
which is another way of displaying information about the speech signal that you see in
the wave form above it. Like the wave form, it shows time on the horizontal axis, but
the vertical axis here is frequency. Among other things, the spectrogram shows the
frequencies that resonate especially well in the vocal tract as black horizontal bands
that appear and disappear, and rise and fall, as the speech articulators move around
inside the vocal tract, changing the size and shape of the resonant cavities inside the
mouth, nose and throat. This is much like the change in sound that happens when a
jug is filling up with water from a faucet: the pitch of the noise rises as the air cavity
above the water level gets smaller. The spectrogram often shows the change from a
consonant to a vowel especially clearly; notice the sharp boundaries between the /m/s
and /n/s and their surrounding vowels in Figure 1.
Experienced ToBI labellers often use the spectrogram to help find the location of
successive sounds, syllables and words. This in turn helps them to keep track of
where changes in the pitch track occur across the words and syllables of the utterance.
Learning how to use this information may take some time, so at the beginning you
may want to rely more on listening to make these judgments.
Underneath the spectrogram with the blue pitch contour is the set of four thinner white boxes that
make up the four tiers in the ToBI labelling window. Unlike the speech displays, these boxes are
text writeable and this is where you | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/0986334a8a593bba602eaaf8ab13de2f_chap1.pdf |