| text | source |
|---|---|
equations of the system D·X = 0. Each unknown x_j for which j is in J appears with a nonzero coefficient in exactly one equation. Therefore, we can solve for each of these unknowns in terms of the remaining unknowns x_k, for k in K. Substituting these expressions for x_{j_1}, ..., x_{j_r} into the n-tuple X = (x_1, ..., x_n), we see that the general solution of the ... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
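A minimal sketch (mine, not the notes'): SymPy can reduce a hypothetical coefficient matrix D and produce the parametric form of the general solution, with the free unknowns x_k as parameters.

```python
# Sketch: the general solution of D·X = 0 read off from a reduced matrix.
# The matrix D below is a hypothetical example, not one from the notes.
from sympy import Matrix

D = Matrix([[1, 0, 2, -1],
            [0, 1, 1, 3]])   # pivots in columns J = {1, 2}; free unknowns x3, x4

# Each nullspace basis vector corresponds to setting one free unknown to 1:
# the general solution is every linear combination of these vectors.
for v in D.nullspace():
    print(v.T)
```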
(0,0,1).
The same procedure we followed in this example can be followed in general. Once we write X as a vector of which each component is a linear combination of the x_k, then we can write it as a sum of vectors, each of which involves only one of the unknowns x_k, and then finally as a linear combination, with coef... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
is Gauss-Jordan elimination.
Corollary. Let A be a k by n matrix. If the rows of A are independent, then the solution space of the system A·X = 0 has dimension n − k. ∎
Now we consider the case of a general system of linear equations, of the form A·X = C. For the moment, we assume that the system has at least o... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
A be a k by n matrix. Let r equal the rank of A.
(a) If r < k, then there exist vectors C in V_k such that the system A·X = C has no solution.
(b) If r = k, then the system A·X = C always has a solution.
Proof. We consider the system A·X = C and apply elementary row operations to both A and C until we have bro... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
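A sketch of the theorem's dichotomy with a hypothetical A and C (my example): the system A·X = C is consistent exactly when rank(A) equals the rank of the augmented matrix.

```python
# Sketch of the dichotomy in (a)/(b), with a hypothetical A and C.
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0]])   # k = 2 rows, but rank r = 1 < k
C = np.array([1.0, 3.0])          # chosen outside the column space

r = np.linalg.matrix_rank(A)
r_aug = np.linalg.matrix_rank(np.column_stack([A, C]))
print("solvable" if r == r_aug else "no solution")   # no solution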
we shall consider the following two cases at the same time: either (1) B has no zero rows, or (2) whenever the ith row of B is zero, then the corresponding entry c_i of C' is zero. We show that in either of these cases, the system has a solution.
Let us consider the system B·X = C' and apply further operations... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
the last equation of the system is
On the other hand, the system
does have a solution. Following the procedure described in the preceding proof, we solve for the unknowns x_1, x_2, and x_4 as follows:
The general solution is thus the 2-plane in V_5 specified by the parametric equation
Remark. Solving the system A·X = ... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
(b) Find conditions on a, b, and c that are necessary and sufficient for the system B·X = C to have a solution, where C =
[Hint: What happens to C when you reduce B to echelon form?]
5. Let A be the matrix of p. A20. Find conditions on a, b, c, and d that are necessary and sufficient for the system A·X = C to have... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
This provides an alternate proof of Theorem 6.)
Let A be a k by n matrix. The columns of A, when looked at as elements of V_k, span a subspace of V_k that is called the column space of A. The row space and column space of A are very different, but it is a totally unexpected fact that they have the same dimension... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
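A quick numerical check of that fact (random example of mine, not the notes'): the row space and column space of A share the same dimension, i.e. rank(A) = rank(Aᵀ).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 7)).astype(float)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))   # always equal
```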
determine m.)
The process of "solving" the system of equations A·X = C that we described in the preceding section is an algorithm for passing from a cartesian equation for M to a parametric equation for M. One can ask whether there is a process for the reverse, for passing from a parametric equation for M to a... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
a k by n matrix A with independent rows A_i, whence W⊥ is the solution space of the system A·X = 0. The space (W⊥)⊥ has dimension n − (n − k) = k, by what we just proved. And it contains each vector A_i (since A_i·X = 0 for each X in W⊥). Therefore it equals the space spanned by the A_i. ∎
Theorem. Suppose a k-plane M... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
only lines (1-planes) and planes (2-planes) to deal with. And we can use either the parametric or cartesian form for lines and planes, as we prefer. However, in this situation we tend to prefer:
parametric form for a line, and
cartesian form for a plane.
Let us explain why.
If L is a line given in parametric fo... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
p_2, p_3) with normal vector N = (a_1, a_2, a_3).
We have thus proved the first half of the following theorem:
Theorem. If M is a 2-plane in V_3, then M has a cartesian equation of the form
a_1 x_1 + a_2 x_2 + a_3 x_3 = b,
where N = (a_1, a_2, a_3) is non-zero. Conversely, any such equation is the cartesian equation of ... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
·X = C of three equations in three unknowns. The rows of A are the normal vectors. The solution space of the system (which consists of the points common to all three planes) consists of a single point if and only if the rows of A are independent. ∎
Theorem 11. Two non-parallel planes in V_3 intersect in a straigh... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
N·P ≠ b. Thus the intersection of L and M is either all of L, or it is empty.
On the other hand, if L is not parallel to M, then N·A ≠ 0. In this case the equation can be solved uniquely for t. Thus the intersection of L and M consists of a single point. ∎
Example 5. Consider the plane M = M(P; A, B) in V_3, wh... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
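A sketch of the computation behind this proof, with hypothetical data: the line X = P + tA meets the plane N·X = b where N·(P + tA) = b, which is uniquely solvable for t exactly when N·A ≠ 0 (line not parallel to the plane).

```python
import numpy as np

N, b = np.array([1.0, 2.0, -1.0]), 4.0                        # plane N·X = b
P, A = np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 1.0])   # line P + tA

if not np.isclose(N @ A, 0.0):
    t = (b - N @ P) / (N @ A)
    print("single intersection point:", P + t * A)
else:
    print("parallel: the intersection is empty or all of L")
```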
P and Q is contained in M.
5. Write a parametric equation for the line of intersection of the planes of Exercise 3.
6. Write a cartesian equation for the plane through P = (−1, 0, 2) and Q = (3, 1, 5) that is parallel to the line through R = (1, 1, 1) with direction vector A = (1, 3, 4).
7. Write cartesian equations fo... | https://ocw.mit.edu/courses/18-024-multivariable-calculus-with-theory-spring-2011/224273817cd62ecaecef64bf6c607878_MIT18_024s11_ChB1notes.pdf |
§ 1. Information measures: entropy and divergence
Review: Random variables
• Two methods to describe a random variable (R.V.) X:
1. a function X : Ω → 𝒳 from the probability space (Ω, F, P) to a target space 𝒳;
2. a distribution P_X on some measurable space (𝒳, F).
• Convention: capital letter – RV (e.g. X); smal... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
= E[ log 1/P_{X|Y}(X|Y) ], i.e., the entropy H(P_{X|Y=y}) averaged over P_Y.
Note:
• Q: Why such definition, why log, why entropy?
Name comes from thermodynamics. Definition is justified by theorems in this course (e.g.
operationally by compression), but also by a number of experiments. For example, we can
measu... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
∈ {0, 1, 2, . . .},  P[X = i] = P_X(i) = p · (1 − p)^i
H(X) = Σ_{i=0}^∞ p(1 − p)^i [ i log 1/(1 − p) + log 1/p ] = log 1/p + ((1 − p)/p) log 1/(1 − p) = h(p)/p,
using Σ_{i=0}^∞ i · p(1 − p)^i = (1 − p)/p, where h(·) is the binary entropy function.
Example (Infinite entropy): Can H(X) = +∞? Yes, P[X = k] = c/(k ln² k), k = 2, 3, ⋯
Review: Convexity
• Convex set: A subset S of some vector space is convex if for all α ∈ [0, 1] ... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
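A numeric check of the geometric-distribution example above (my sketch, in bits): the termwise sum for H(X) matches h(p)/p.

```python
import numpy as np

p = 0.3
i = np.arange(500)                       # tail beyond 500 is negligible
pmf = p * (1 - p) ** i
H_direct = -(pmf * np.log2(pmf)).sum()
h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
print(H_direct, h / p)                   # both ≈ 2.94 bits
```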
inequality: For any S-valued random variable X,
– f is convex ⇒ f(E X) ≤ E f(X)
– f is strictly convex ⇒ f(E X) < E f(X) unless X is a constant (X = E X a.s.)
Famous puzzle: A man says, ”I am the average height and average weight of the
population. Thus, I am an average man.” However, he is still co... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
H(X_1, . . . , X_n) = Σ_{i=1}^n H(X_i | X_1, . . . , X_{i−1})   (1.1)
≤ Σ_{i=1}^n H(X_i),   (1.2)
with equality iff X_1, . . . , X_n mutually independent
Proof. 1. Expectation of a non-negative function.
2. Jensen's inequality.
3. H only depends on the values of P_X, not their locations: H( ) = H( ).
4. Later (Lecture 2).
5. E[ log 1/P_{XY}(X, Y) ] = ...
6. Intuition: X, f(X) contain... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
value of X is always between H(X) bits and ... In this special case the above scheme is optimal because (intuitively) it always splits the probability in half. With P[X = a] = 1/2, P[X = b] = 1/4, P[X = c] = P[X = d] = 1/8, we succeed by asking "X = b?"; if not, ask "X = c?", after which we will know for sure. The expected number of questions is 1/2 × 1 + 1/4 × 2 + 1/8 × 3 + 1/8 × 3 = 1.75, the minimal number of questions... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
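A sketch of the guessing game above (probabilities assumed 1/2, 1/4, 1/8, 1/8, as reconstructed): the expected number of yes/no questions equals the entropy H(X) = 1.75 bits.

```python
import numpy as np

probs = np.array([1/2, 1/4, 1/8, 1/8])
questions = np.array([1, 2, 3, 3])       # questions asked in decreasing-probability order
print((probs * questions).sum())         # 1.75
print(-(probs * np.log2(probs)).sum())   # H(X) = 1.75
```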
1 − p) → 0 as p → 0.
e) Subadditivity: H(X, Y) ≤ H(X) + H(Y). Equivalently, H_{mn}(r_{11}, . . . , r_{mn}) ≤ H_m(p_1, . . . , p_m) + H_n(q_1, . . . , q_n) whenever Σ_{j=1}^n r_{ij} = p_i and Σ_{i=1}^m r_{ij} = q_j.
f) Additivity: H(X, Y) = H(X) + H(Y) if X ⊥⊥ Y. Equivalently, H_{mn}(p_1 q_1, . . . , p_m q_n) = H_m(p_1, . . . , p_m) + Hn(q... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
returns to its original state periodically), produces useful work and whose only other effect on the outside world is drawing heat from a warm body. (That is, every such machine should expend some amount of heat to some cold body too!)1
An equivalent formulation is: there does not exist a cyclic process that transfers hea... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
is that the mysterious entropy did not have any formula for it (unlike, say, energy), and thus had to be computed indirectly on the basis of relation (1.3). This was changed
with the revolutionary work of Boltzmann and Gibbs, who showed that for a system of n particles
the entropy of a given macro-state can be computed a... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
.
This follows from a simple chain:
H(A, B, C) + H(B) = H(A, C|B) + 2H(B)   (1.4)
≤ H(A|B) + H(C|B) + 2H(B)   (1.5)
= H(A, B) + H(B, C).   (1.6)
Note that entropy is not only submodular, but also monotone: T_1 ⊂ T_2 ⇒ H(X_{T_1}) ≤ H(X_{T_2}).
So fixing n, let us denote by Γ_n the ... submodular set-functions ... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
I(X_3; X_4 | X_1) − I(X_3; X_4 | X_2) ≤ (1/2) I(X_1; X_2) + (1/4) I(X_1; X_3, X_4) + (1/4) I(X_2; X_3, X_4).
(see Definition 2.3).
1.1.4 Entropy: Han’s inequality
Theorem 1.3 (Han's inequality). Let X^n be a discrete n-dimensional RV and denote by H̄_k(X^n) the average of H(X_T) over all k-subsets T of coordinates. Then H̄_k/k is decreasing... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
{1, . . . , n} to get H̄_{k+1} + H̄_{k−1} ≤ 2H̄_k, as claimed by (1.11).
Alternative proof: Notice that by "conditioning decreases entropy" we have
H(X_{k+1} | X_1, . . . , X_k) ≤ H(X_{k+1} | X_2, . . . , X_k).
Averaging this inequality over all permutations of indices yields (1.11).
Note: Han’s inequality holds for any submod... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
P_X(dx) ∫_𝒴 P_{Y|X=x}(dy) 1{(x, y) ∈ E}.
Intuition: D(P∥Q) gauges the dissimilarity between P and Q.
Definition 1.4 (Divergence). Let P, Q be distributions on
• A = discrete alphabet (finite or countably infinite):
D(P∥Q) ≜ Σ_{a∈A} P(a) log ( P(a) / Q(a) ),
where we agree:
(1) 0 · log(0/0) = 0
(2) ∃a : Q(a) = 0, P(a) > 0 ⇒ D(P∥... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
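A direct implementation of Definition 1.4 with the stated conventions (a sketch of mine; divergence in bits).

```python
import numpy as np

def kl_divergence(P, Q):
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    if np.any((Q == 0) & (P > 0)):
        return np.inf                 # convention (2): D(P||Q) = +infinity
    mask = P > 0                      # convention (1): 0·log(0/0) = 0
    return np.sum(P[mask] * np.log2(P[mask] / Q[mask]))

print(kl_divergence([0.5, 0.5, 0.0], [0.25, 0.25, 0.5]))   # 1.0 bit
```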
change of measure]
Such f is called a density (or Radon-Nikodym derivative) of P w.r.t. Q, denoted by dP/dQ.
For finite alphabets, we can just take dP/dQ(x) to be the ratio of the pmfs. For P and Q on R^n possessing pdfs we can take dP/dQ(x) to be the ratio of the pdfs.
• (Infinite values) D(P∥Q) can be ∞ ... are consis... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
... log e · D(P∥Q).   (1.14)
• (Other divergences) A general class of divergence-like measures was proposed by Csiszár. Fixing a convex function f : R_+ → R with f(1) = 0, we define the f-divergence D_f as
D_f(P∥Q) ≜ E_Q[ f(dP/dQ) ].   (1.15)
This encompasses total variation, χ²-distance, Hellinger, Tsallis etc. Inequal... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
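A sketch of (1.15) on a discrete alphabet with Q > 0 everywhere (my example): f(t) = |t − 1|/2 recovers total variation.

```python
import numpy as np

def f_divergence(P, Q, f):
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    return np.sum(Q * f(P / Q))       # E_Q[f(dP/dQ)] with dP/dQ = P/Q

P, Q = np.array([0.5, 0.5, 0.0]), np.array([0.25, 0.25, 0.5])
tv = f_divergence(P, Q, lambda t: np.abs(t - 1) / 2)
print(tv, 0.5 * np.abs(P - Q).sum())  # both 0.5
```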
A = C. The pdf of N_c(m, σ²) is (1/(πσ²)) e^{−|x−m|²/σ²}, or equivalently:
N_c(m, σ²) = N( [Re m, Im m], [σ²/2, 0; 0, σ²/2] )
D(N_c(m_1, σ_1²) ∥ N_c(m_0, σ_0²)) = log(σ_0²/σ_1²) + [ |m_1 − m_0|²/σ_0² + σ_1²/σ_0² − 1 ] log e
Example (Vector Gaussian): A = C^k,
D(N_c(m_1, Σ_1) ∥ N_c(m_0, Σ_0)) = log det(Σ_0 Σ_1^{−1}) + (m_1 − m_0)... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
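The scalar complex-Gaussian divergence as reconstructed above, in nats (so the trailing log e = 1); a sketch with hypothetical parameters.

```python
import numpy as np

def kl_complex_gaussian(m1, s1sq, m0, s0sq):
    return np.log(s0sq / s1sq) + abs(m1 - m0) ** 2 / s0sq + s1sq / s0sq - 1

# Pure mean shift with equal variances: D = |m1 - m0|^2 / sigma0^2
print(kl_complex_gaussian(1 + 1j, 2.0, 0.0, 2.0))   # 1.0
```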
X^k is
h(X^k) = h(P_{X^k}) ≜ −D(P_{X^k} ∥ Leb).   (1.19)
In particular, if X^k has probability density function (pdf) p, then h(X^k) = E log 1/p(X^k); otherwise h(X^k) = −∞. Conditional differential entropy h(X^k | Y) ≜ E log 1/p_{X^k|Y}(X^k | Y), where p_{X^k|Y} is a conditional pdf.
Warning: Even for X with pdf, h(X) can be po... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
Theorem 1.6 (Bollobás-Thomason Box Theorem). Let K ⊂ R^n be a compact set. For S ⊂ [n], denote by K_S the projection of K on the subset S of coordinate axes. Then there exists a rectangle A s.t. Leb{A} = Leb{K} and for all S ⊂ [n]: Leb{A_S} ≤ Leb{K_S}.
2For an example, consider a piecewise-constant pdf taking value e^{(−1)^n n} on the n... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
(1.24)
Proof. Apply the previous theorem to construct a rectangle A and note that
Leb{K} = Leb{A} = Π_{j=1}^n Leb{A_{j^c}}^{1/(n−1)}.
By the previous theorem, Leb{A_{j^c}} ≤ Leb{K_{j^c}}.
The meaning of the Loomis-Whitney inequality is best understood by introducing the average width of K in direction j: w_j ≜ Leb{K} / Leb{K_{j^c}}. Then (1.24) is equivalent to ... | https://ocw.mit.edu/courses/6-441-information-theory-spring-2016/2243edffb30f57181ed97dcb77691580_MIT6_441S16_chapter_1.pdf |
15.093J Optimization Methods
Lecture 4: The Simplex Method II
Slide 1
Slide 2
Slide 3
Slide 4
1 Outline
• Revised Simplex method
• The full tableau implementation
• Finding an initial BFS
• The complete algorithm
• The column geometry
• Computational efficiency
2 Revised Simplex
Initial data: A, b, c
1. St... | https://ocw.mit.edu/courses/15-093j-optimization-methods-fall-2009/227bd50822ccfd66c653bccf0a3e11fe_MIT15_093J_F09_lec04.pdf |
2.2 Practical issues
• Numerical stability: B⁻¹ needs to be computed from scratch once in a while, as errors accumulate.
• Sparsity: B⁻¹ is represented in terms of sparse triangular matrices.
3 Fu... | https://ocw.mit.edu/courses/15-093j-optimization-methods-fall-2009/227bd50822ccfd66c653bccf0a3e11fe_MIT15_093J_F09_lec04.pdf |
, x, s ≥ 0
s = b, x = 0
5.1 Artificial variables
1. Multiply rows with −1 to get b ≥ 0: Ax = b, x ≥ 0.
2. Introduce artificial variables y, start with initial BFS y = b, x = 0, and apply simplex to the auxiliary problem
min y_1 + y_2 + . . . + y_m
s.t. Ax + y = b
x, y ≥ 0
3. If cost > 0 ⇒ LOP infeasible; stop.
4. If co... | https://ocw.mit.edu/courses/15-093j-optimization-methods-fall-2009/227bd50822ccfd66c653bccf0a3e11fe_MIT15_093J_F09_lec04.pdf |
. . . , y_m, if necessary, and apply the simplex method to min Σ_{i=1}^m y_i.
3. If cost > 0, original problem is infeasible; STOP.
4. If cost = 0, a feasible solution to the original problem has been found.
5. Drive artificial variables out of the basis, potentially eliminating redundant rows.
Phase II:
1. Let the fina... | https://ocw.mit.edu/courses/15-093j-optimization-methods-fall-2009/227bd50822ccfd66c653bccf0a3e11fe_MIT15_093J_F09_lec04.pdf |
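A sketch of Phase I using scipy's linprog (an outside solver, not the course's own code): minimize the sum of artificial variables y subject to Ax + y = b, x, y ≥ 0. Zero optimal cost certifies that Ax = b, x ≥ 0 is feasible. The data below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])        # hypothetical data with b >= 0
b = np.array([4.0, 3.0])

m, n = A.shape
c_aux = np.concatenate([np.zeros(n), np.ones(m)])   # cost y1 + ... + ym
A_eq = np.hstack([A, np.eye(m)])                    # [A  I][x; y] = b
res = linprog(c_aux, A_eq=A_eq, b_eq=b)             # default bounds give x, y >= 0
print("feasible" if res.fun < 1e-9 else "infeasible", res.x[:n])
```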
The column geometry
[Figure: the column geometry of the simplex method — successive bases above the feasible region, moving from the initial basis through a next basis to the optimal basis.]
9 Computational efficiency
Exceptional practical behavior: linear in n
Worst case ... | https://ocw.mit.edu/courses/15-093j-optimization-methods-fall-2009/227bd50822ccfd66c653bccf0a3e11fe_MIT15_093J_F09_lec04.pdf |
- 2.12 Lecture Notes -
H. Harry Asada
Ford Professor of Mechanical Engineering
Fall 2005
Introduction to Robotics, H. Harry Asada
1
Chapter 1
Introduction
Many definitions have been suggested for what we call a robot. The word may conjure up various
level... | https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf |
. batch size
Department of Mechanical Engineering
Massachusetts Institute of Technology
Introduction to Robotics, H. Harry Asada
2
issue in manufacturing innovation for a few decades, and numerical control has played a central
role in increasing system flexibility. Contemporary industrial robots are progr... | https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf |
those of numerical control
Department of Mechanical Engineering
Massachusetts Institute of Technology
Introduction to Robotics, H. Harry Asada
3
and remote manipulation. Thus a widely accepted definition of today’s industrial robot is that of a
numerically controlled manipulator, whe... | https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf |
robots are multi-input spatial mechanisms which require more sophisticated analytical tools.
The dynamic behavior of robot manipulators is also complex, since the dynamics of multi-input
spatial linkages are highly coupled and nonlinear. The motion of each joint is significantly
affected by the motions of all the ot... | https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf |
6 Medical robots for minimally invasive surgery
systems, where the human operator takes the decisions and applies control actions. The operator
interprets a given task, finds an appropriate strategy to accomplish the task, and plans the
procedure of operations. He/she devises an effective way of achieving the goal on... | https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf |
tasks themselves.
In order to devise appropriate arm mechanisms and to develop effective control algorithms, we need to
precisely understand how a given task should be
accomplished and what sort of motions the robot
should be able to achieve. To perform an assembly
operation, for example, we need to know how to
gu... | https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf |
Locomotion and navigation are increasingly important, as robots find challenging applications in
the field. This opened up new research and development areas in robotics. Novel mechanisms are
needed to allow robots to move through crowded areas, rough terrain, narrow channels, and even
staircases. Various types of legg... | https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf |
and experienced
engineers with fundamentals of understanding robotic tasks and intelligent behaviors as well as
with enabling technologies needed for building and controlling robotic systems.
[Figure 1-10: JPL's planetary exploration robot (Source: JPL); diagram blocks: Sensor Data, Map Building, Location Estimation, Robot Control.] an... | https://ocw.mit.edu/courses/2-12-introduction-to-robotics-fall-2005/228dbf33cc6dc0f84c0d87e00b31a410_chapter1.pdf |
Deep Learning/Double Descent
Gilbert Strang
MIT
October, 2019
Number of Weights
[Plot: the fitted learning function for N = 40 versus N = 4000 weights, over −3 ≤ x ≤ 3.]
Fit training data by a Learning function F
We are given training data : Inputs v, outputs w
Example: Each v is an image of a number, w = 0, 1, . . . , 9
The vector v d... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
descent to find x
Backpropagation to compute grad F
Layer k: v_k = F_k(v_{k−1}) = ReLU(A_k v_{k−1} + b_k)
Weights for layer k: A_k = matrix and b_k = offset vector
v_0 = training data / v_1, . . . , v_{ℓ−1} hidden layers / v_ℓ = output
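A minimal forward pass matching the layer rule above (my sketch; the sizes are made up): v_k = ReLU(A_k v_{k−1} + b_k) on hidden layers, linear at the output.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 3]                  # v0 in R^4, two hidden layers, output in R^3
params = [(rng.standard_normal((m, n)), rng.standard_normal(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

def forward(v, params):
    for k, (A, b) in enumerate(params):
        z = A @ v + b
        v = np.maximum(z, 0.0) if k < len(params) - 1 else z   # ReLU on hidden layers
    return v

print(forward(rng.standard_normal(4), params))
```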
Deep Neural Networks
1 Key operation
2 Key rule
3 Key algorithm
4 Key subroutine
5 Key nonlinearity ... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
"Square loss" = error ℓ = ‖F(x, v_j) − true‖²; cross-entropy loss, hinge loss, . . .
Classification problem: true = 1 or −1
Regression problem: true = vector
Gradient descent x_{k+1} = x_k − s_k ∇L(x_k)
Stochastic descent: the same step with the gradient computed on a sample of the data ... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
Key Questions
1. Optimization of the weights x = Ak and bk
2. Convergence rate of descent and accelerated descent
(when xk+1 depends on xk and xk−1 : momentum added)
3. Do the weights A1, b1 . . . generalize to unseen test... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
optimizes weights Ak, bk
2. Backpropagation in the computational graph computes
derivatives with respect to weights x = A1, b1, . . . , Aℓ, bℓ
3. The learning function F (x, v0) = . . . F3(F2(F1(x, v)))
F_1(v_0) = max(A_1 v_0 + b_1, 0) = ReLU ∘ affine map
F (v) is continuous piecewise linear : how many pieces?
This measures t... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
by the N hyperplanes
r(N, m) = Σ_{i=0}^m (N choose i) = (N choose 0) + (N choose 1) + · · · + (N choose m)
N = 3 folds in a plane will produce 1 + 3 + 3 = 7 pieces
Recursion: r(N, m) = r(N − 1, m) + r(N − 1, m − 1)
v_0 has m components / v_1 has N components / N ReLU's
The number o... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
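A sketch checking the count of flat pieces: the recursion r(N, m) = r(N−1, m) + r(N−1, m−1) reproduces the binomial sum, e.g. 7 pieces from N = 3 folds in a plane.

```python
from functools import lru_cache
from math import comb

@lru_cache(None)
def r(N, m):
    if N == 0 or m == 0:
        return 1                       # no folds, or a 0-dimensional input: one piece
    return r(N - 1, m) + r(N - 1, m - 1)

print(r(3, 2), sum(comb(3, i) for i in range(3)))   # 7 = 1 + 3 + 3
```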
F(x) = F_2(F_1(x)) is continuous piecewise linear
One hidden layer of neurons : deep networks have many more
Overfitting is not desirable ! Gradient descent stops early !
“Generalization” measured by success on unseen test data
Big problems are underdetermined [# weights > # samples]
Stochastic Gradient Descen... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
Convolutional Neural Nets (CNN)
A is a banded Toeplitz matrix whose rows are shifts of the weights (x_1, x_0, x_{−1}): N + 2 inputs and N outputs. Each shift has a diagonal of 1's:
A = x_1 L + x_0 C + x_{−1} R
∂y/∂x_1 = Lv,  ∂y/∂x_0 = Cv,  ∂y/∂x_{−1} = Rv ... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
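A sketch of A = x1·L + x0·C + x−1·R acting on v (N + 2 inputs, N outputs), checked against numpy's convolution (np.convolve flips the kernel, so the weights are passed in reversed order).

```python
import numpy as np

N = 5
v = np.arange(N + 2, dtype=float)
x1, x0, xm1 = 2.0, -1.0, 0.5

A = np.zeros((N, N + 2))
for i in range(N):                       # row i applies (x1, x0, x-1) to v[i:i+3]
    A[i, i], A[i, i + 1], A[i, i + 2] = x1, x0, xm1

y_matrix = A @ v
y_conv = np.convolve(v, [xm1, x0, x1], mode="valid")
print(np.allclose(y_matrix, y_conv))     # True
```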
Convolutions in Two Dimensions
Weights: a 3 × 3 kernel x_{ij}, i, j ∈ {−1, 0, 1}
Input image v_{ij}, i, j from (0, 0) to (N + 1, N + 1); output image y_{ij}, i, j from (1, 1) to (N, N)
Shifts L, C, R, U, D = Left, Center, Right, Up, Down
A convolution is ... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
A convolution is a combination of shift matrices = filter = Toeplitz matrix
The coefficients in the combination will be... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
Here are three loss functions; cross-entropy is a favorite loss function for neural nets:
1 Square loss L(x) = (1/N) Σ_{i=1}^N ‖F(x, v_i) − true‖² : sum over samples v_i
2 Hinge loss L(x) = (1/N) Σ_{i=1}^N max(0, 1 − t F(x)) for classification
3 Cross-entropy loss L(x) = −(1/N) Σ_{i=1}^N [ y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i) ] ... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
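The three losses as plain functions (a sketch with made-up inputs; natural log in the cross-entropy, labels y in {0, 1}, targets t in {+1, −1}).

```python
import numpy as np

def square_loss(pred, true):
    return np.mean(np.sum((pred - true) ** 2, axis=-1))

def hinge_loss(scores, t):
    return np.mean(np.maximum(0.0, 1.0 - t * scores))

def cross_entropy(y_hat, y):
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

print(hinge_loss(np.array([0.8, -0.3]), np.array([1, -1])))   # (0.2 + 0.7)/2 = 0.45
```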
Learning rate
The first descent step starts out perpendicular to the level set. As it crosses through lower level sets, the function f(x, y) is decreasing. Eventually its path is tangent to a level set L.
Slow convergence on a zig-zag path to the minimum of f = x² + by².
Momentum and the Path of a Heavy Ball
D... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
It seems a miracle that this problem has a beautiful solution. The optimal s and β for the 2 × 2 update coupling (c_k, d_k) are
s = ( 2 / (√λ_max + √λ_min) )²  and  β = ( (√λ_max − √λ_min) / (√λ_max + √λ_min) )²
Momentum and the Path of a Heavy Ball
D... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
Steepest descent factor: (.99/1.01)² = .96.  Accelerated descent factor: (.9/1.1)² = .67.
Notice that λ_max/λ_min = 1/b = κ is the condition number of S.
Key difference: b is replaced by √b
Ordinary descent factor: ( (1 − b)/(1 + b) )²   Accelerated descent factor: ( (1 − √b)/(1 + √b) )²
Steepest des... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
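Reproducing the slide's contraction factors (a sketch) with b = 1/100:

```python
b = 1 / 100
ordinary = ((1 - b) / (1 + b)) ** 2               # (.99/1.01)^2
accelerated = ((1 - b**0.5) / (1 + b**0.5)) ** 2  # (.9/1.1)^2
print(round(ordinary, 2), round(accelerated, 2))  # 0.96 0.67
```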
Stochastic Gradient Descent
Stochastic gradient descent uses a "minibatch" of the training data. Every step is much faster than using all data. We don't want to fit the training data too perfectly (overfitting): think of choosing a polynomial of degree 60 to fit 61 data points. ... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
toward the solution x*. Here we pause to look at semi-convergence: a fast start by stochastic gradient descent; convergence at the start changes to large oscillations near the solution.
Kaczmarz for Ax = b with random i(k): x_{k+1} = x_k + ( (b_i − a_i^T x_k) / ‖a_i‖² ) a_i
Stochastic Descent Using One Sample Per Step
Early step... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
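A sketch of the randomized Kaczmarz update quoted above, on a hypothetical consistent system:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
b = A @ x_true                                   # consistent system

x = np.zeros(10)
for k in range(2000):
    i = rng.integers(len(b))                     # random row i(k)
    a = A[i]
    x += (b[i] - a @ x) / (a @ a) * a            # project onto a_i^T x = b_i
print(np.linalg.norm(x - x_true))                # small: iterates approach x*
```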
... s_{k−1}² + (1 − β) ‖∇L(x_k)‖²
Why do the weights generalize well to unseen test data?
Computation of ∂F/∂x : Explicit Formulas
v_L = b_L + A_L v_{L−1}, or simply w = b + Av.
The output w_i is not affected by b_j or A_{jk} if j ≠ i.
Fully connected layer, independent weights A_{jk}:
∂w_i/∂b_j = δ_{ij}  and  ∂w_i/∂A_{jk} = δ_{ij} v_k
Example: for w = (a_{11}v_1 + a_{12}v_2, a_{21}v_1 + a_{22}v_2),
∂w_1/∂b_1 = 1,  ∂w_1/∂a_{11} = v_1,  ∂w_1/∂a_{12} = v_2,  ∂w_1/∂a_{21} = ∂w_1/∂a_{22} = 0. | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
What is the multivariable chain rule? Which order (forward or backward along the chain) is faster?
Backpropagation and the Chain Rule
L(x) adds up all the losses ℓ(w − true) = ℓ(F(x, v) − true).
The partial derivatives of L with respect to the weights x should be zero.
Chain rule: d/dx ( F_3(F_2(F_1(x))) ) = ( dF_3/dF_2 (F_2(F_1(x))) ) ( dF_2/dF_1 (F_1(x)) ) ( dF_1/dx (x) ) | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
(M_{L−1}(M_L w)) needs only N² + N²
Forward (((M_1 M_2) M_3) . . . M_L) w needs (L − 1)N³ + N²
Backward M_1(M_2(. . . (M_L w))) needs L N²
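A sketch of the operation counts above: multiplying the chain into a vector backward keeps every product matrix-vector.

```python
def forward_cost(L, N):                # (((M1 M2) M3) ... ML) w
    return (L - 1) * N**3 + N**2

def backward_cost(L, N):               # M1(M2(...(ML w)))
    return L * N**2

print(forward_cost(4, 100), backward_cost(4, 100))   # 3010000 vs 40000
```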
The Multivariable Chain Rule
∂w/∂v = [ ∂w_i/∂v_j ], a p × n matrix of partials; ∂v/∂u = [ ∂v_j/∂u_k ], an n × m matrix ... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
∂w_i/∂u_k = (∂w_i/∂v_1)(∂v_1/∂u_k) + · · · + (∂w_i/∂v_n)(∂v_n/∂u_k) = (row i of ∂w/∂v) · (column k of ∂v/∂u)
Multivariable chai... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
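A numeric sketch of the multivariable chain rule: for compositions of linear maps the Jacobians are the matrices themselves, and the Jacobian of the composition is the matrix product (∂w/∂v)(∂v/∂u).

```python
import numpy as np

rng = np.random.default_rng(0)
dw_dv = rng.standard_normal((3, 5))    # p x n Jacobian of w(v)
dv_du = rng.standard_normal((5, 2))    # n x m Jacobian of v(u)

dw_du = dw_dv @ dv_du                  # entry (i,k) = row i of dw/dv · column k of dv/du
print(dw_du.shape, np.isclose(dw_du[0, 1], dw_dv[0] @ dv_du[:, 1]))   # (3, 2) True
```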
is too large: overshooting the best choice x_{k+1} in the descent direction.
Cross-validation: divide the available data into K subsets.
Hyperparameters : The Fateful Decisions
The words learning rate are often used in place of stepsize.
s_k is too small: then gradient descent takes too long to minimize L(x).
s_k is too lar... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
variance of the error (overfitting). Large λ: increase the bias (underfitting); the term ‖b − Ax‖² is less important.
Deep learning with many extra weights and good hyperparameters will find solutions that generalize, without penalty.
Softmax Outputs for Multiclass Networks
Softmax: p_j = (1/S) e^{w_j}, where S = Σ_{k=1}^n e^{w_k}
Softm... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
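The softmax formula above, with the standard max-subtraction for numerical stability (the shift cancels in S, so p is unchanged). A sketch.

```python
import numpy as np

def softmax(w):
    e = np.exp(w - np.max(w))   # avoid overflow for large w
    return e / e.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))   # positive, sums to 1
```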
(v) is continuous there exists x so that |F(x, v) − f(v)| < ε for all v
Accuracy of approximation to f: min_x ‖F(x, v) − f(v)‖ ≤ C ‖f‖_S
Deep networks give closer approximation than splines or shallow nets
Neural Nets Give Universal Approximation
If f(v) is continuous there exists x so that F(x, v... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
v) has folds along N hyperplanes H_1, . . . , H_N. Those come from N linear equations a_i^T v + b_i = 0, in other words ReLU at N neurons. F has r(N, m) linear pieces:
r(N, m) = Σ_{i=0}^m (N choose i) = (N choose 0) + (N choose 1) + · · · + (N choose m)
r(N, m) = r(N − 1, m) + r(N − 1, m − 1) ... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
r(2, 1) = 3
Continuous Piecewise Linear Function
How many linear pieces with more layers? Now ReLU is folding piecewise linear functions.
Hanin-Rolnick: still r(N, m) ≈ c N^m pieces from N neurons ... | https://ocw.mit.edu/courses/18-085-computational-science-and-engineering-i-summer-2020/22a453f2f41f9c34ad274b7d7da9a0aa_MIT18_085Summer20_lec_GS.pdf |
OpenCourseWare
2.12 Uncertainty in labelling
9 August 2006
While many stretches of speech will be straightforward to label with ToBI, many others
may prove more challenging to the labeller. Naturally-produced speech contains an
enormous amount of variation, and there are cases where a labeller may be uncertain
wh... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
alt tier, a recent addition to the ToBI system, allows the labeller to indicate places
where more than one label was seriously considered. There may be times when a labeller
spends a comparatively long time determining which of two possible labels to use for a
particular tone or break. The alternatives tier allows a... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
.2, contains
two instances where the labeller was not certain about which pitch accent label to use:
first on the syllable –deed of indeed, and then later in the file, on the monosyllabic word
clouds. In the first full intonational phrase of the file (shown in Figure 2.12.1) with the
words and indeed, there is a st... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
the second Intonational Phrase of the file <indeed>, shown in full in Figure 2.12.2,
below, the labeller has used L+H*? on the word clouds. In this case, the second option
considered by the labeller was L+!H* (where the bitonal has a downstepped High). In
this case, the labeller perceived the pitch height of the pea... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
in the tones tier, and
both L+H* and H* in the alt tier. The labeller should align a single point in the alt tier
with the X*? in the tones tier, and list both alternatives at that point, separated by a
“pipe” (“|”) symbol, e.g. “H*|L+H*”.
Table 2.12.2 Commonly confusable pairs of pitch accents
H* vs L+H*:
The pit... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
H+!H* vs !H*:
The labeller is uncertain whether a peak leading to the !H* prominence is associated with
that pitch accent, or if it is associated with some preceding High tone pitch accent (such
as a late peak after a H*).
2.12.4 Uncertainty about whether or not a syllable is pitch-accented
There will be cases of ... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
er has used the *? label on the word
how in the second intermediate phrase, for how long in that file. In this example, the
labeller’s use of the L* pitch accent symbol on the word long shows that she was certain
that the word was pitch accented, and that the pitch patterns were best captured by the
Low pitch accent... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
also contains a pitch accent. This can happen when the syllable in question occurs during
a stretch of high pitch between two H* pitch accents, or comparably during a stretch of
low pitch between two L* pitch accents.
The example <marmalade7> shows an example of a *? label in a region between two L*
pitch accents. ... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
):
• after one or more !H* pitch accents, such that the local pitch range has been greatly
compressed
• when the speaker is generally using reduced pitch range
• when there is a phrase final full-vowel syllable, especially a monosyllabic word, where
the duration cues to accent and boundary may be confounded.
• a ... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
with the “-” diacritic. So, uncertainty between break
index level 4 and level 3 can be indicated with the break index of “4-”. It is not necessary
to use the alt tier for such cases, as the two alternative labels are understood to be 4 vs. 3.
In such cases, the labeller must mark the phrase accent and boundary tone ... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
alternatives
the labeller preferred. For cases where the labeller wishes to indicate a strong preference
for one break label over the other alternative, the labeller may make use of the alt tier.
Figure 2.12.16, below, shows the same file, <sure>, with the alt tier used instead of the
“minus” diacritic. In this cas... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
example <diagonal>, the labeller indicated uncertainty
about which boundary tone best would capture the tone patterns at the end of the final full
intonational phrase of the file, realized on the word mountain. Here, the labeller was
uncertain that the appropriate phrase accent was H-, but less certain about the app... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
uncertainty markers, which can be used where the labeller has
no “best guess” about which phrase accent or boundary tone to use: X-? and X%?
respectively. These should be used only as a “last resort,” such as in cases where the
recording quality is very poor.
Summary of ToBI labels introduced so far:
Tones:
H*: h... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
is definitely pitch-accented, but the labeller is unable to determine
or decide which type of pitch accent
used in the tones or breaks tier to indicate that the Alternatives (alt) tier has
been used for a particular label
used after a break index (on the breaks tier) number to indicate uncertainty
between two leve... | https://ocw.mit.edu/courses/6-911-transcribing-prosodic-structure-of-spoken-utterances-with-tobi-january-iap-2006/22ab414fc96a65b7bce85d1e08bc31b7_chap2_12.pdf |
The 1-D Wave Equation
18.303 Linear Partial Differential Equations
Matthew J. Hancock
Fall 2006
1 1-D Wave Equation: Physical derivation
Reference: Guenther & Lee §1.2, Myint-U & Debnath §2.1-2.4
[Oct. 3, 2006]
We consider a string of length l with ends fixed, and rest state coinciding with
x-axis. The string... | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/22ead9d70b36836a68d13c7393e19649_waveeqni.pdf |
x and x + Δx. Let T(x, t) be tension and θ(x, t) be the angle wrt the horizontal x-axis. Note that
tan θ(x, t) = slope of tangent at (x, t) in the ux-plane = ∂u/∂x (x, t).   (1)
Newton's Second Law (F = ma) states that
F = (ρΔx) ∂²u/∂t²   (2)
where ρ is the linear density of the string (ML⁻¹) and Δx is the... | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/22ead9d70b36836a68d13c7393e19649_waveeqni.pdf |
= τ ( tan θ(x + Δx, t) − tan θ(x, t) ) = τ ( ∂u/∂x (x + Δx, t) − ∂u/∂x (x, t) ).
Substituting F from (2) into Eq. (4) and dividing by Δx gives
ρ ∂²u/∂t² (ξ, t) = τ [ ∂u/∂x (x + Δx, t) − ∂u/∂x (x, t) ] / Δx
for ξ ∈ [x, x + Δx]. Letting Δx → 0 gives the 1-D Wave Equation
∂²u/∂t² = c² ∂²u/∂x²,   c² = τ/ρ > 0.
... | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/22ead9d70b36836a68d13c7393e19649_waveeqni.pdf |
a force balance at either x = 0 or x = 1 gives
T sin θ = mg   (6)
In other words, the vertical tension in the string balances the mass of the cylinder. But τ = T cos θ = const and tan θ = u_x, so that (6) becomes τ u_x = T cos θ tan θ = mg. Rearranging yields
u_x = mg/τ,   x = 0, 1.
These are Type II BCs. If the stri... | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/22ead9d70b36836a68d13c7393e19649_waveeqni.pdf |
+ (t³/3!) ∂³u/∂t³ (x, 0) + · · ·
From the initial conditions, u(x, 0) = f(x), u_t(x, 0) = g(x), and the PDE gives
u_tt(x, 0) = c² u_xx(x, 0) = c² f''(x),
∂³u/∂t³ (x, 0) = c² u_txx(x, 0) = c² g''(x).
Higher order terms can be found similarly. Therefore, the two initial conditions for u(x, 0) and u... | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/22ead9d70b36836a68d13c7393e19649_waveeqni.pdf |
x̂ = x/L*,   t̂ = t/T*,   û(x̂, t̂) = u(x, t)/L*,   f̂(x̂) = f(x)/L*,   ĝ(x̂) = T* g(x)/L*.
From the chain rule,
∂u/∂x = L* (∂û/∂x̂)(∂x̂/∂x) = ∂û/∂x̂,   ∂u/∂t = L* (∂û/∂t̂)(∂t̂/∂t) = (L*/T*) ∂û/∂t̂,
and similarly for higher derivatives. Substituting the dimensionless variables into 1... | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/22ead9d70b36836a68d13c7393e19649_waveeqni.pdf |
Problem is, from (10) – (12),
PDE: u_tt = u_xx,   0 < x < 1   (13)
BC: u(0, t) = 0 = u(1, t),   t > 0   (14)
IC: u(x, 0) = f(x),   u_t(x, 0) = g(x),   0 < x < 1   (15)
2 Separation of variables solution
Ref: Guenther & Lee §4.2, Myint-U & Debnath §6.2, and §7.1 – 7.3
Substituting u(x... | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/22ead9d70b36836a68d13c7393e19649_waveeqni.pdf |
of (18) are
λ_n = (nπ)²,   X_n(x) = b_n sin(nπx),   n = 1, 2, 3, ...
The function T(t) satisfies T'' + λT = 0, and hence each eigenvalue λ_n corresponds to a solution T_n(t):
T_n(t) = α_n cos(nπt) + β_n sin(nπt).
Thus, a solution to the PDE and BCs is
u_n(x, t) = ( α_n cos(nπt) + β_n sin(nπt) ) sin(nπx),
where we ha... | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/22ead9d70b36836a68d13c7393e19649_waveeqni.pdf |
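A sketch of the separated solution: u(x, t) = Σ_n (α_n cos nπt + β_n sin nπt) sin nπx, with coefficients fixed by the sine series of f and g (the ICs give α_n from f and nπ·β_n from g). The quadrature grid is my choice.

```python
import numpy as np

def sine_coeff(func, n, xs, dx):
    return 2.0 * np.sum(func(xs) * np.sin(n * np.pi * xs)) * dx

def wave_solution(f, g, x, t, n_terms=50):
    dx = 1e-4
    xs = np.arange(dx / 2, 1.0, dx)                    # midpoint grid on (0, 1)
    u = np.zeros_like(x, dtype=float)
    for n in range(1, n_terms + 1):
        alpha = sine_coeff(f, n, xs, dx)               # matches u(x,0) = f(x)
        beta = sine_coeff(g, n, xs, dx) / (n * np.pi)  # matches u_t(x,0) = g(x)
        u += (alpha * np.cos(n * np.pi * t)
              + beta * np.sin(n * np.pi * t)) * np.sin(n * np.pi * x)
    return u

f = lambda x: np.sin(np.pi * x)          # initial displacement
g = lambda x: np.zeros_like(x)           # zero initial velocity
print(wave_solution(f, g, np.array([0.25, 0.5]), t=0.0))   # ≈ f(0.25), f(0.5)
```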
sin(nπx) dx,   ∫₀¹ g(x) sin(nπx) dx
Note: The convergence of this series is harder to show, because we don't have decaying exponentials e^{−n²π²t} in the sum terms (more later).
Note: Given BCs and an IC, the wave equation has a unique solution (Myint-U & Debnath §6.3).
3 Interpretation - Normal mod... | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/22ead9d70b36836a68d13c7393e19649_waveeqni.pdf |