, CP = 5, so each symbol is 2·32 + 5 = 69 samples
Exactly 1/8 of the downstream rate
Cite as: Vladimir Stojanovic, course materials for 6.973 Communication System Design, Spring 2006.
MIT OpenCourseWare (http://ocw.mit.edu/), Massachusetts Institute of Technology.
Downloaded on [DD Month YYYY].
6.973 Communication System Design
... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
[Table residue: bits per subcarrier (1/2, 3/4, 1, 3/4, 3/2, 9/4, ...) and data bits per OFDM symbol b: 24, 36, 48, 72, 96, 144, 192, 216]
Broadcast channel – can’t optimize bit allocation
Figure by MIT OpenCourseWare.
FCC mandates a flat transmit spectrum, so there is no energy allocation
The only knob is data-rate selection
... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
[Figure: OFDM transceiver block diagram: Interleaver / Deinterleaver, Mapper / Demapper, Pilot insertion, Channel estimator, FFT/IFFT, Cyclic prefix insertion / Remove prefix, Synchronizer, Windowing, DAC, AGC & ADC, Upconvert, LNA & Downconvert]
... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
Scrambling
Need to randomize incoming data
Enables a number of tracking algorithms in... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
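A minimal sketch of such a randomizer, assuming the additive LFSR form with the x^7 + x^4 + 1 polynomial used in 802.11a; running the same generator at the receiver undoes it:

```python
def scramble(bits, seed=0b1111111):
    """XOR data with the output of a 7-bit LFSR (polynomial x^7 + x^4 + 1)."""
    state = seed
    out = []
    for b in bits:
        fb = ((state >> 6) ^ (state >> 3)) & 1   # taps at x^7 and x^4
        state = ((state << 1) | fb) & 0x7F       # shift, keep 7 bits
        out.append(b ^ fb)
    return out

data = [0, 1, 1, 0, 1, 0, 0, 1]
assert scramble(scramble(data)) == data   # XOR with the same sequence twice
assert scramble(data) != data             # the stream really is randomized
```

With the all-ones seed this LFSR emits the familiar 127-bit sequence starting 0000 1110 ..., which is easy to check against the standard.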
[Figure: "bit stealing" on a rate-1/2 convolutional stream. Input data is clocked through delay elements Tb; encoded data is A0 B0 A1 B1 A2 B2 ...; the bit-stolen (sent/received) data is A0 B0 A1 B2 A3 B3 A4 B5 A6 B6 A7 B8, with the stolen bits removed; at the receiver, dummy bits are re-inserted at the stolen positions before decoding.]
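The stealing pattern read off the figure (per three input pairs, send A0 B0 A1 B2 and steal B1 and A2, giving rate 3/4) can be sketched as follows; the pattern here is inferred from the figure, not quoted from a standard:

```python
# Rate-3/4 "bit stealing" from a rate-1/2 stream; pattern inferred from the
# figure (period 3: keep A0 B0 A1 B2, steal B1 and A2).
KEEP_A = [1, 1, 0]
KEEP_B = [1, 0, 1]

def puncture(a_bits, b_bits):
    out = []
    for i, (a, b) in enumerate(zip(a_bits, b_bits)):
        if KEEP_A[i % 3]: out.append(a)
        if KEEP_B[i % 3]: out.append(b)
    return out

def depuncture(stream, n):
    # re-insert dummy bits (None = erasure) at the stolen positions
    it = iter(stream)
    a, b = [], []
    for i in range(n):
        a.append(next(it) if KEEP_A[i % 3] else None)
        b.append(next(it) if KEEP_B[i % 3] else None)
    return a, b

A = [1, 0, 1, 1, 0, 0]
B = [0, 1, 1, 0, 1, 0]
tx = puncture(A, B)
assert len(tx) == 8        # 12 coded bits -> 8 sent bits: rate 3/4
ra, rb = depuncture(tx, 6)
assert ra[2] is None and rb[1] is None   # erasures where bits were stolen
```

A soft-decision Viterbi decoder then treats the `None` positions as erasures.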
g1=17... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
Pilot insertion and FFT/IFFT
Pilot insertion
Pilots are BPSK, PRBS-modulated
FFT a... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
Figure by MIT OpenCourseWare.
Pilot tracking and channel correction
... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
[Figure: synchronizer tracking data path: delay lines z⁻¹⁶ and z⁻⁴⁹, conjugate multiply, moving averages (16), magnitude and plateau detector for packet detection, and a combine stage. Figure by MIT OpenCourseWare.]
... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
. McFarland, D. Su and J. Thomson, "Design and implementation of an all-CMOS 802.11a wireless LAN chipset," IEEE Communications Magazine, vol. 41, no. 8, pp. 160-168, 2003.
[2] M. Krstic, K. Maharatna, A. Troya, E. Grass and U. Jagdhold, "Implementation of an IEEE 802.11a compliant low-power baseband pro... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
Mahonen, "On the single-chip implementation of a Hiperlan/2 and IEEE 802.11a capable modem," IEEE Personal Communications, vol. 8, no. 6, pp. 48-57, 2001.
... | https://ocw.mit.edu/courses/6-973-communication-system-design-spring-2006/2a8950c0b8a77687bae83a70ff459b08_lecture_4.pdf |
Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
6.245: MULTIVARIABLE CONTROL SYSTEMS
by A. Megretski
Fundamentals of Model Order Reduction
This lecture introduces basic principles of model order reduction for LTI systems, which
is about finding good low order approx... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
finding a "reduced" system Ŝ = Ŝk of complexity not larger than a given threshold k, such that the distance between Ŝ and a given "complex" system S is as small as possible. Alternatively, a maximal admissible distance between S and Ŝ can be given, in which case the complexity k of Ŝ is to be minimized.
As i... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
Ĝk)‖∞ is as small as possible, where G, W are given stable transfer matrices (W⁻¹ is also assumed to be stable), and ‖Δ‖∞ denotes the H-Infinity norm (L2 gain) of a stable system Δ. As a result of model order reduction, G can be represented as a series connection of a lower order "nominal plant" Ĝ and a bounded unc... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
‖·‖₂ is the H2 norm then the optimization becomes a standard least squares problem reducible to solving a system of linear equations. If ‖·‖ = ‖·‖∞ is the H-Infinity norm, the optimization is reduced to solving a system of Linear Matrix Inequalities (LMI), a special class of convex optimization problems solvable... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
‖Δ‖H of a stable system Δ is never larger than its H-Infinity norm ‖Δ‖∞, hence solving the Hankel norm optimal model reduction problem yields a lower bound for the minimum in the H-Infinity norm optimal model reduction. Moreover, the H-Infinity norm of the model reduction error associated with a Hankel norm optimal reduced mo... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
tion, in which a given stable transfer function G = G(s) is to be approximated by the ratio Ĝ(s) = p(s)/q(s), where p, q are polynomials of order m.
One popular approach is moments matching. In the simplest form of moments matching, an m-th order approximation Ĝ(s) = p(s)/q(s) (where p, q are polynomials of ord... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
a fixed
sampling rate T > 0 are available for the impulse response of a given CT LTI system. If
the system has order m, there would exist a Schur polynomial
q(z) = q_m z^m + q_{m−1} z^{m−1} + ⋯ + q_1 z + q_0,
with q_m ≠ 0, such that
q_m y_{k+m} + q_{m−1} y_{k+m−1} + ⋯ + q_1 y_{k+1} + q_0 y_k = 0
for all k > 0. The idea is to define the... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
ancing
Here we introduce the fundamental notions of the Hankel operator and Hankel singular numbers associated with a given stable LTI system. For practical calculations, balancing of a stable LTI system is defined and explained.
8.2.1 Hankel Operator
The "convolution operator" f ↦ y = g ∗ f associated with a LTI... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
Let G = G(s) be an m-by-k matrix-valued function of a complex argument which is defined on the extended imaginary axis jR ∪ {∞}, where it satisfies the conditions of real symmetry (i.e. elements of G(−jω) are the complex conjugates of the corresponding entries of G(jω)) and continuity (i.e. G(jω) converges to G(jω₀) as ω... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
→ 0, and produces a signal g which equals the corresponding output of G for t > 0, and equals zero for t ≤ 0.
8.2.2 Rank and Gain of a Hankel Operator
As a linear transformation H_G : L₂ᵏ(0, ∞) → L₂ᵐ(0, ∞), a Hankel operator has its rank and L2 gain readily defined: the rank of H_G is the maximal number of linearly independent... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
.
In other words, the rank of H_G equals the order of the stable part of G, and the L2 gain of H_G is never larger than ‖G‖∞.
Proof To show the gain/L-Infinity norm relation, let us return to the definition of H_G in the previous subsection. Note that the L2 norm of g₀ is not larger than ‖G‖∞‖f‖₂. On the other hand, ‖g‖₂ ≤ ‖g₀‖₂.... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
an integer k > 0 find a matrix M̂ = M̂_k of rank less than k which minimizes λ_max(M − M̂).
Since the set of matrices of given dimensions of rank less than k is not convex (unless k is larger than one of the dimensions), one can expect that the matrix rank reduction problem will be difficult to solve. However, in ... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
v1) length λ2 = |M v2| when transformed
with M . Vector u2 is defined by M v2 = λ2u2. In general, the vector vr has unit length,
is orthogonal to v1, . . . , vr−1, and yields a maximal (over all vectors of unit length and
orthogonal to v1, . . . , vr−1) length λr = |M vr | when transformed with M . Vector ur is
defin... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
[Example matrix partially garbled in extraction; legible rows are (0 0 0) and (2 0 1)]
The framework of optimal matrix rank reduction can be easily extended to the class
of linear transformations M from one Hilbert space to another. (In the case of a real n-by-m matrix M, the Hilbert spaces are Rᵐ and Rⁿ.) One potential complication is that the vectors vr of maximal amplification d... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
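For a finite matrix, the optimal rank reduction in the 2-norm is exactly the truncated SVD (Eckart-Young); a small numpy spot-check with an arbitrary matrix:

```python
import numpy as np

# Optimal rank-r approximation in the spectral norm via truncated SVD:
# keep the r largest singular triplets (Eckart-Young).
def rank_reduce(M, r):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 4))
Mr = rank_reduce(M, 2)
s = np.linalg.svd(M, compute_uv=False)
# approximation error equals the first discarded singular value
assert np.isclose(np.linalg.norm(M - Mr, 2), s[2])
assert np.linalg.matrix_rank(Mr) == 2
```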
(t > 0).
Note that calculation of Wc is easy via the Lyapunov equation
A Wc + Wc A′ = −B B′
The energy of the future output produced by the initial state x(0) = x₀, provided zero input for t > 0, equals x₀′ Wo x₀, where
Wo = ∫₀^∞ e^{A′t} C′ C e^{At} dt
is the observability Gramian of the system. The ... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
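Both Gramians are a single Lyapunov solve away; a sketch using SciPy on an arbitrarily chosen stable pair (A, B):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Controllability Gramian Wc from A Wc + Wc A' = -B B'; toy stable system.
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
Wc = solve_continuous_lyapunov(A, -B @ B.T)
# verify the residual and symmetry of the solution
assert np.allclose(A @ Wc + Wc @ A.T, -B @ B.T)
assert np.allclose(Wc, Wc.T)
```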
By the definition of Wc, Wo,
Wo = M′M,  Wc = N N′.
Hence M, N can be represented in the form
M = U Wo^{1/2},  N = Wc^{1/2} V′,
where the linear transformations U, V preserve the 2-norm.
Since the Hankel operator under consideration has the form
H = M N = U Wo^{1/2} Wc^{1/2} V′,
in order to find the SVD of H, it is sufficient... | https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf |
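In computation one rarely forms H itself: the Hankel singular values are the square roots of the eigenvalues of Wc·Wo. A sketch on an arbitrary toy system:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hankel singular values from the two Gramians; (A, B, C) chosen arbitrarily.
A = np.array([[-1.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
Wc = solve_continuous_lyapunov(A, -B @ B.T)        # A Wc + Wc A' = -B B'
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)      # A' Wo + Wo A = -C' C
hsv = np.sqrt(np.sort(np.linalg.eigvals(Wc @ Wo).real)[::-1])
assert hsv[0] >= hsv[1] > 0   # system is minimal, so all HSVs are positive
```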
2.160 Identification, Estimation, and Learning
Lecture Notes No. 2
February 13, 2006
2. Parameter Estimation for Deterministic Systems
2.1 Least Squares Estimation
[Figure: deterministic system with parameter θ; inputs u₁, u₂, …, u_m, output y]
Linearly parameterized model
Input-output: y = b₁u₁ + b₂u₂ + ⋯ + b_m u_m
Param... | https://ocw.mit.edu/courses/2-160-identification-estimation-and-learning-spring-2006/2ada973a6a0171832bafbb05c177fcc9_lecture_2.pdf |
input-output equation.
y(t) = b₁ u(t − 1) + b₂ u(t − 2),  ϕ(t) = [u(t − 1), u(t − 2)]ᵀ
Using an estimated parameter vector θ̂, we can write a predictor that predicts the output from inputs:
ŷ(t | θ̂) = ϕ(t)ᵀ θ̂   (2)
We evaluate the predictor’s performance by the squared error giv... | https://ocw.mit.edu/courses/2-160-identification-estimation-and-learning-spring-2006/2ada973a6a0171832bafbb05c177fcc9_lecture_2.pdf |
Under this condition, the optimal parameter vector is given by
θ̂ = P B   (8)
where
P = (Σ_{t=1}^N ϕ(t) ϕ(t)ᵀ)⁻¹ = (Φ Φᵀ)⁻¹   (9)
B = Σ_{t=1}^N y(t) ϕ(t)   (10)
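Equations (8)-(10) transcribe directly into numpy; with noise-free toy data the estimate recovers the true parameters exactly:

```python
import numpy as np

# Batch least squares: theta_hat = P B, P = (Phi Phi^T)^{-1},
# B = sum_t y(t) phi(t); toy linearly parameterized model y = phi^T theta.
rng = np.random.default_rng(1)
theta_true = np.array([2.0, -1.0, 0.5])
Phi = rng.standard_normal((3, 50))     # columns are the regressors phi(t)
y = Phi.T @ theta_true                 # noise-free outputs for the sketch
P = np.linalg.inv(Phi @ Phi.T)
Bv = Phi @ y                           # = sum_t y(t) phi(t)
theta_hat = P @ Bv
assert np.allclose(theta_hat, theta_true)
```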
2.2 The Recursive Least-Squares Algorithm
While the above algorithm is for batch processing of whole data, we often need to
estimate pa... | https://ocw.mit.edu/courses/2-160-identification-estimation-and-learning-spring-2006/2ada973a6a0171832bafbb05c177fcc9_lecture_2.pdf |
= Φ Φᵀ   (11)
B = Σ_{t=1}^N y(t) ϕ(t)
Three steps for obtaining a recursive computation algorithm
a) Splitting Bt and Pt
From (10):
B_t = Σ_{i=1}^t y(i) ϕ(i) = Σ_{i=1}^{t−1} y(i) ϕ(i) + y(t) ϕ(t)
B_t = B_{t−1} + y(t) ϕ(t)
From (11):
P_t⁻¹ = Σ_{i=1}^t (ϕ(i) ϕ(i)ᵀ)
P= ... | https://ocw.mit.edu/courses/2-160-identification-estimation-and-learning-spring-2006/2ada973a6a0171832bafbb05c177fcc9_lecture_2.pdf |
Postmultiplying by ϕ(t)ᵀ P_{t−1} and rearranging gives
P_{t−1} ϕ(t) ϕ(t)ᵀ P_{t−1} = (1 + ϕ(t)ᵀ P_{t−1} ϕ(t)) (P_{t−1} − P_t)   (18)
Therefore,
P_t = P_{t−1} − P_{t−1} ϕ(t) ϕ(t)ᵀ P_{t−1} / (1 + ϕ(t)ᵀ P_{t−1} ϕ(t))
Note that no matrix inversion is needed for updating... | https://ocw.mit.edu/courses/2-160-identification-estimation-and-learning-spring-2006/2ada973a6a0171832bafbb05c177fcc9_lecture_2.pdf |
θ̂(t) = θ̂(t − 1) + [P_{t−1} ϕ(t) / (1 + ϕ(t)ᵀ P_{t−1} ϕ(t))] [y(t) − ϕ(t)ᵀ θ̂(t − 1)]
Replacing the gain P_{t−1} ϕ(t) / (1 + ϕ(t)ᵀ P_{t−1} ϕ(t)) by K_t ∈ R^{m×1}, we obtain (17).
The Recursive Least Squares (RLS) Algorithm
θ̂(t) = θ̂(t − 1) + K_t [y(t) − ϕ(t)ᵀ θ̂(t − 1)]
P_t = P_{t−1} − P_{t−1} ϕ(t) ϕ(t)ᵀ P_{t−1} / (1 + ϕ(t)ᵀ P_{t−1} ϕ(t))... | https://ocw.mit.edu/courses/2-160-identification-estimation-and-learning-spring-2006/2ada973a6a0171832bafbb05c177fcc9_lecture_2.pdf |
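The RLS recursion in code: the denominator 1 + ϕ(t)ᵀP_{t−1}ϕ(t) is a scalar, so each step is inversion-free. The initialization P₀ = 10⁶I below is a common choice, not taken from the notes:

```python
import numpy as np

# One step of recursive least squares; only a scalar division, no inversion.
def rls_step(theta, P, phi, y):
    denom = 1.0 + phi @ P @ phi
    K = P @ phi / denom                      # gain K_t
    theta = theta + K * (y - phi @ theta)    # prediction-error update
    P = P - np.outer(P @ phi, phi @ P) / denom
    return theta, P

rng = np.random.default_rng(2)
theta_true = np.array([1.0, -2.0])
theta = np.zeros(2)
P = 1e6 * np.eye(2)                          # large P0: weak prior on theta
for _ in range(100):
    phi = rng.standard_normal(2)
    theta, P = rls_step(theta, P, phi, phi @ theta_true)
assert np.allclose(theta, theta_true, atol=1e-4)
```

With noise-free data the estimate converges to the batch least-squares solution, as the derivation above guarantees.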
3. Convex functions
Convex Optimization — Boyd & Vandenberghe
• basic properties and examples
• operations that preserve convexity
• the conjugate function
• quasiconvex functions
• log-concave and log-convex functions
• convexity with respect to generalized inequalities
3–1
Definition
f : Rⁿ → R is conve... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
R, for any a, b ∈ R
• powers: x^α on R++, for 0 ≤ α ≤ 1
• logarithm: log x on R++
Convex functions 3–3
Examples on Rⁿ and R^{m×n}
affine functions are convex and concave; all norms are convex
examples on Rⁿ
• affine function f(x) = aᵀx + b
• norms: ‖x‖_p = (Σᵢ |xᵢ|^p)^{1/p} for p ≥ 1
examples on R^{m×n} (m × n matrices... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
one variable
example. f : Sⁿ → R with f(X) = log det X, dom f = Sⁿ₊₊
g(t) = log det(X + tV) = log det X + log det(I + t X^{−1/2} V X^{−1/2})
     = log det X + Σᵢ₌₁ⁿ log(1 + t λᵢ)
where λᵢ are the eigenvalues of X^{−1/2} V X^{−1/2}
g is concave in t (for any choice of X ≻ 0, V); hence f is concave
Convex functions ... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
∂f(x)/∂x₁, ∂f(x)/∂x₂, . . . , ∂f(x)/∂xₙ) exists at each x ∈ dom f
1st-order condition: differentiable f with convex domain is convex iff
f(y) ≥ f(x) + ∇f(x)ᵀ(y − x) for all x, y ∈ dom f
[Figure: graph of f with the tangent f(x) + ∇f(x)ᵀ(y − x) at the point (x, f(x))]
first-order approximation of f is a global underestimator
... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
f(x) = ‖Ax − b‖₂²
∇f(x) = 2Aᵀ(Ax − b),  ∇²f(x) = 2AᵀA
convex (for any A)
quadratic-over-linear: f(x, y) = x²/y
∇²f(x, y) = (2/y³) [y; −x][y; −x]ᵀ ⪰ 0
convex for y > 0
[Figure: surface plot of f(x, y) = x²/y]
log-sum-exp: f(x) = log Σ_{k=1}^n exp x_k is convex
... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
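Log-sum-exp is also the standard example of a convex function that should be evaluated with a max-shift to avoid overflow; the sketch below also spot-checks convexity via Jensen's inequality at random points:

```python
import numpy as np

# Numerically stable log-sum-exp: shift by max(x) before exponentiating.
def logsumexp(x):
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

rng = np.random.default_rng(3)
x, y = rng.standard_normal(5), rng.standard_normal(5)
theta = 0.3
# Jensen's inequality: f(theta x + (1-theta) y) <= theta f(x) + (1-theta) f(y)
assert logsumexp(theta * x + (1 - theta) * y) <= \
       theta * logsumexp(x) + (1 - theta) * logsumexp(y) + 1e-12
# naive exp(1000) overflows; the shifted version does not
assert np.isclose(logsumexp(np.array([1000.0, 1000.0])), 1000.0 + np.log(2))
```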
sublevel set
α-sublevel set of f : Rⁿ → R:
Cα = {x ∈ dom f | f(x) ≤ α}
sublevel sets of convex functions are convex (converse is false)
epigraph of f : Rⁿ → R:
epi f = {(x, t) ∈ Rⁿ⁺¹ | x ∈ dom f, f(x) ≤ t}
[Figure: epi f shown as the region above the graph of f]
f is convex if and only if epi f is a convex set
Jensen’s ine... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
• supremum
• composition
• minimization
• perspective
Positive weighted sum & composition with affine function
nonnegative multiple: αf is convex if f is convex, α ≥ 0
sum: f1 + f2 convex if f1, f2 convex (extends to infinite sums, integrals)
composition with affine function: f (Ax + ... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
Pointwise supremum
if f(x, y) is convex in x for each y ∈ A, then
g(x) = sup_{y∈A} f(x, y)
is convex
examples
• support function of a set C: S_C(x) = sup_{y∈C} yᵀx is convex
• distance to farthest point in a set C: f(x) = sup_{y∈C} ‖x − y‖
• maximum eigenvalue of sym... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
h : Rᵏ → R:
f(x) = h(g(x)) = h(g₁(x), g₂(x), . . . , g_k(x))
f is convex if
• gᵢ convex, h convex, h̃ nondecreasing in each argument
• gᵢ concave, h convex, h̃ nonincreasing in each argument
proof (for n = 1, differentiable g, h)
f″(x) = g′(x)ᵀ ∇²h(g(x)) g′(x) + ∇h(g(x))ᵀ g″(x)
examples
• Σ_{i=1}^m log ... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
Perspective
the perspective of a function f : Rⁿ → R is the function g : Rⁿ × R → R,
g(x, t) = t f(x/t),  dom g = {(x, t) | x/t ∈ dom f, t > 0}
g is convex if f is convex
examples
• f(x) = xᵀx is convex; hence g(x, t) = xᵀx/t is convex for t > 0
• negative logarithm f(x) = −log x; g(x, t) = t lo... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
= sup_{x>0} (xy + log x)
= −1 − log(−y) for y < 0, ∞ otherwise
strictly convex quadratic: f(x) = (1/2) xᵀ Q x with Q ∈ Sⁿ₊₊
f*(y) = sup_x (yᵀx − (1/2) xᵀQx) = (1/2) yᵀ Q⁻¹ y
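The quadratic conjugate is easy to spot-check numerically: every sampled value of yᵀx − f(x) sits below f*(y), with equality at the maximizer x = Q⁻¹y (Q and y here are arbitrary):

```python
import numpy as np

# Conjugate of f(x) = (1/2) x'Qx, Q positive definite: f*(y) = (1/2) y'Q^{-1}y.
rng = np.random.default_rng(4)
Q = np.array([[2.0, 0.5], [0.5, 1.0]])       # positive definite
y = np.array([1.0, -1.0])
fstar = 0.5 * y @ np.linalg.solve(Q, y)
# random x never beat the conjugate value...
samples = [y @ x - 0.5 * x @ Q @ x
           for x in rng.standard_normal((1000, 2)) * 3]
assert max(samples) <= fstar + 1e-9
# ...and the maximizer x* = Q^{-1} y attains it exactly
xstar = np.linalg.solve(Q, y)
assert np.isclose(y @ xstar - 0.5 * xstar @ Q @ xstar, fstar)
```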
Quasiconvex functions
f : Rⁿ → R is quasiconvex if dom f is convex and the sublevel sets
Sα = {x ∈ d... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
f(x) = ‖x − a‖₂ / ‖x − b‖₂ is quasiconvex on
dom f = {x | ‖x − a‖₂ ≤ ‖x − b‖₂}
internal rate of return
cash flow x = (x₀, …, xₙ); xᵢ is payment in period i (to us if xᵢ > 0)
we assume x₀ < 0 and x₀ + x₁ + ⋯ + xₙ > 0
present value of cash flow x, for interest rate r:
PV(x, r) = Σ_{i=0}^n (1 + r)⁻ⁱ xᵢ
... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
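For cash flows like the one below (all payments after period 0 positive), PV(x, r) is strictly decreasing in r, so the internal rate of return can be bracketed and found by bisection; a sketch with an arbitrary two-period cash flow:

```python
# Internal rate of return by bisection: PV(x, 0) > 0 by assumption and
# PV(x, r) -> x0 < 0 as r grows, so a root exists on [0, hi].
def pv(x, r):
    return sum(xi / (1 + r) ** i for i, xi in enumerate(x))

def irr(x, hi=10.0, iters=100):
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if pv(x, mid) > 0:   # still profitable: IRR is to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = [-1.0, 0.6, 0.6]     # invest 1, receive 0.6 twice
r = irr(x)
assert abs(pv(x, r)) < 1e-8
```

For this cash flow the bisection converges to r ≈ 0.13.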
f(y) ≤ f(x) ⇒ ∇f(x)ᵀ(y − x) ≤ 0
[Figure: ∇f(x) at x defines a supporting hyperplane to the sublevel set]
sums of quasiconvex functions are not necessarily quasiconvex
Log-concave and log-convex functions
a positive function f is log-concave if log f is concave:
f(θx + (1 − θ)y) ≥ f(x)^θ f(y)^{1−θ} for 0 ≤ θ ≤ 1
f is log-convex if log f is convex
• powers: ... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
log-concave, then
g(x) = ∫ f(x, y) dy
is log-concave (not easy to show)
consequences of integration property
• convolution f ∗ g of log-concave functions f, g is log-concave
(f ∗ g)(x) = ∫ f(x − y) g(y) dy
• if C ⊆ Rⁿ is convex and y is a random variable with log-concave pdf then
f... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
f(θx + (1 − θ)y) ⪯_K θ f(x) + (1 − θ) f(y)
for x, y ∈ dom f, 0 ≤ θ ≤ 1
example f : Sᵐ → Sᵐ, f(X) = X² is Sᵐ₊-convex
proof: for fixed z ∈ Rᵐ, zᵀX²z = ‖Xz‖₂² is convex in X, i.e.,
zᵀ(θX + (1 − θ)Y)²z ≤ θ zᵀX²z + (1 − θ) zᵀY²z  for X, Y ∈ Sᵐ, 0 ≤ θ ≤ 1
therefore (θX + (1 − θ)Y)² ⪯ θX² + (1 − θ)Y²
... | https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf |
Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
6.685 Electric Machines
Class Notes 6: DC (Commutator) and Permanent Magnet Machines
(cid:13)c 2005 James L. Kirtley Jr.
1
Introduction
Virtually all electric machines, and all practical electric machines employ some form of... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
rotor (this is the part that handles the electric power), and current is fed
to the armature through the brush/commutator system. The interaction magnetic field is provided
(in this picture) by a field winding. A permanent magnet field is applicable here, and we will have
quite a lot more to say about such arrangements be... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
∮ E′ · dℓ = −(d/dt) ∫ B · n da + ∮ (v × B) · dℓ
where v is the velocity of the contour. This gives us a convenient way of noting the apparent electric field within a moving object (as in the conductors in a DC machine):
E′ = E + v × B
Now, note that the armature conductors are moving through the magnetic field produced by
the stator... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
G Ω If,  Ia = (Va − G Ω If)/Ra
Now, note that these expressions define three regimes defined by rotational speed. The two "break points" are at zero speed and at the "zero torque" speed:
Ω0 = Va / (G If)
[Figure 3: DC Machine Equivalent Circuit (Ra in series with the back voltage G Ω If across Va)]
Figure 4: DC Machine Operating Re... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
excited machine is the shunt connection in which armature and field are supplied by the same source, in parallel. This connection is not widely used any more: it does
[Figure 5: Two-Chopper, separately excited machine hookup]
[Figure 6: Series Connection]
not yield any meaningful ability to contro... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
(Ra + Rf + GΩ)² + (ωLa + ωLf)²
where ω is the electrical supply frequency. Note that, unlike other AC machines, the universal
motor is not limited in speed to the supply frequency. Appliance motors typically turn substantially
faster than the 3,600 RPM limit of AC motors, and this is one reason why they are so wide... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
), that coil is shorted by one of the brushes. The brush resistance causes the current in the coil to decay. Then, when the leading commutator segment leaves the brush, the current MUST reverse (the trailing coil has current in it), and there is often sparking.
1.4 Commutation
[Figure: field poles, stator yoke, rotor, field winding, arm... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
ductance, but there is still flux produced. This adds to the flux density on one side of the main
poles (possibly leading to saturation). To make the flux distribution more uniform and therefore
to avoid this saturation effect of quadrature axis flux, it is common in very highly rated machines
to wind compensation coils: es... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
temperatures,
most have sensitivity to demagnetizing fields, and proper machine design requires understanding
the materials well. These notes will not make you into seasoned permanent magnet machine de-
signers. They are, however, an attempt to get started, to develop some of the mathematical skills
required and to poin... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
units for magnetic field quantities, and these systems are often mixed up to form very
confusing units. We will try to stay away from the English system of units in which field intensity
H is measured in amperes per inch and flux density B in lines (actually, usually kilolines) per square inch. In CGS units flux density is measured in ... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
in
Figure 13: A piece of permanent magnet material is wrapped in a magnetic circuit with effectively
infinite permeability. Assume the thing has some (finite) depth in the direction you can’t see. Now,
if we take Ampere's law around the path described by the dotted line,
∮ H · dℓ = 0
since there is no current anywhere ... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
between Bm and Hm is inherently nonlinear, as shown in Figure 15, much as in the "load line" analysis of a nonlinear electronic circuit.
Now, one more ‘cut’ at this problem. Note that, at least for fairly large unit permeances the
slope of the magnet characteristic is fairly constant. In fact, for most of the permanent magnets
used in... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
surface of the magnetic circuit it will be zero over all of the magnetic
circuit (i.e. at both the top of the gap and the bottom of the magnet). Finally, note that we can’t
actually assume that the scalar potential satisfies Laplace’s equation everywhere in the problem.
In fact the divergence of M is zero everywhere exc... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
most common geometry that is used. The rotor
(armature) of the machine is a conventional, windings-in-slots type, just as we have already seen
for commutator machines. The field magnets are fastened (often just bonded) to the inside of a
steel tube that serves as the magnetic flux return path.
Assume for the purpose of fi... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
, and the empirical coefficient
ℓ_eff = ℓ* f_ℓ
where
N ≈ A log(1 + B hm/R),  with B = 7.4 − 9.0 and A = 0.9 hm/R
[equation partially garbled in extraction]
3.1.1 Voltage:
It is, in this case, simplest to consider voltage generated in a single wire first. If the machine is
running at angular velocity Ω, speed voltage is, while the wire is under a magnet,
N... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
is the number of active conductors, C_tot is the total number of conductors and m is the number of parallel paths. The motor coefficient is then:
K = (C_tot / m) R ℓ_eff B_d θ* / π
3.2 Armature Resistance
The last element we need for first-order prediction of performance of the motor is the value of
armature resistance. The armatu... | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf |
Algorithmic Aspects of Machine Learning
Ankur Moitra
c© Draft date March 30, 2014
Algorithmic Aspects of Machine Learning
©2015 by Ankur Moitra.
Note: These are unpolished, incomplete course notes.
Developed for educational use at MIT and for publication through MIT OpenCourseware.
Contents
Contents
Preface
1 I... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
. . . . . . . . . . . . . . . . . . . 44
3.6
Independent Component Analysis . . . . . . . . . . . . . . . . . . . . 50
4 Sparse Recovery
53
4.1 Basics
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.2 Uniqueness and Uncertainty Principles . . . . . . . . . . . . . . . . . 56
4.3 Pursu... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
. . . . . . . . . . . . . . . . . . . . 83
6.2 Clustering-Based Algorithms . . . . . . . . . . . . . . . . . . . . . . . 86
6.3 Discussion of Density Estimation
. . . . . . . . . . . . . . . . . . . . 89
6.4 Clustering-Free Algorithms . . . . . . . . . . . . . . . . . . . . . . . . 91
6.5 A Univariate Algorithm
.... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
ufei Zhao, Hilary Finucane, Matthew Johnson, Kayhan Batmanghelich, Gautam Kamath, George
Chen, Pratiksha Thaker, Mohammad Bavarian, Vlad Firoiu, Madalina Persu, Cameron
Musco, Christopher Musco, Jing Lin, Timothy Chu, Yin-Tat Lee, Josh Alman,
Nathan Pinsker and Adam Bouland.
Chapter 1
Introduction
This cour... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
they work. In some cases, we will even be able to analyze approaches that
practitioners already use and give new insights into their behavior.
Question 2 Can new models – that better represent the instances we actually want
to solve in practice – be the inspiration for developing fundamentally new algorithms
for ma... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
(i.e. M = Σᵢ σᵢ uᵢ vᵢᵀ) where uᵢ is the ith column of U, vᵢ is the ith column of V and σᵢ is the ith diagonal entry of Σ.
Every matrix has a singular value decomposition! In fact, this representation
can be quite useful in understanding the behavior of a linear operator or in general
CHAPTER 2. NONNEGATIVE MATRIX FACTORIZATION... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
is so widely
useful: if we are given data in the form of a matrix M but we believe that the data
is approximately low-rank, a natural approach to making use of this structure is to
instead work with the best rank k approximation to M . This theorem is quite robust
and holds even when we change how we measure how go... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
u1 = argmax_u ‖uᵀM‖₂ / ‖u‖₂
and the maximum is σ1. Similarly if we want to project onto a two-dimensional subspace so as to maximize the projected variance we should project on span(u1, u2). Relatedly,
u2 = argmax_{u ⊥ u1} ‖uᵀM‖₂ / ‖u‖₂
and the maximum is σ2. This is called the variational characterization of sing... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
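This characterization is easy to spot-check against a library SVD: no random unit vector beats σ₁, and the top left singular vector attains it:

```python
import numpy as np

# Variational characterization: sigma_1 = max_u ||u^T M||_2 / ||u||_2.
rng = np.random.default_rng(5)
M = rng.standard_normal((5, 3))
s = np.linalg.svd(M, compute_uv=False)
ratios = [np.linalg.norm(u @ M) / np.linalg.norm(u)
          for u in rng.standard_normal((2000, 5))]
assert max(ratios) <= s[0] + 1e-9        # random directions never exceed sigma_1
U, _, _ = np.linalg.svd(M)
assert np.isclose(np.linalg.norm(U[:, 0] @ M), s[0])   # u1 attains the maximum
```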
be possible to cluster the documents
just knowing what words each one contains but not their order. This is often called
the bag-of-words assumption.
The idea behind latent semantic indexing is to compute the singular value
decomposition of M and use this for information retrieval and clustering. More
precisely, i…
undesirable
properties:
(a) “topics” are orthonormal
Consider topics like “politics” and “finance”. Are the sets of words that describe
these topics uncorrelated? No!
(b) “topics” contain negative values
This is more subtle, but negative words can be useful to signal that a document is
not about a given topic. But …
practice agree! It can be computed efficiently, and it has many uses. But in spite of this
intractability result, nonnegative matrix factorization really is used in practice. The
standard approach is to use alternating minimization:
Alternating Minimization: This problem is non-convex, but suppose we
guess A. Then computing the nonnegative W that minimizes ‖M − AW‖F is a convex problem, and similarly for A with W fixed, so we can alternate between updating the two factors.
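A minimal Python sketch of alternating minimization (the planted instance, dimensions, and iteration count are my own choices; `scipy.optimize.nnls` solves each convex nonnegative least-squares subproblem):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
# Planted instance (my own choice): M = A_true @ W_true, inner dimension k.
n, m, k = 10, 12, 3
A_true, W_true = rng.random((n, k)), rng.random((k, m))
M = A_true @ W_true

def best_nonneg_factor(A, M):
    # For fixed A, each column of W solves a separate convex problem:
    # min_{w >= 0} ||A w - M_j||_2, handled here by SciPy's nnls.
    return np.column_stack([nnls(A, M[:, j])[0] for j in range(M.shape[1])])

A = rng.random((n, k))                  # initial guess for A
for _ in range(30):
    W = best_nonneg_factor(A, M)        # best W given A
    A = best_nonneg_factor(W.T, M.T).T  # best A given W (transposed problem)

err = np.linalg.norm(M - A @ W)
```

Each half-step is convex and can only decrease ‖M − AW‖F, but nothing rules out stalling at a local optimum, consistent with the intractability result mentioned above.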
for only a few hundred topics. So one way to reformulate the question is to ask
what its complexity is as a function of k. We will essentially resolve this using
algebraic techniques. Nevertheless if we want even better algorithms, we need more
assumptions. We will s…
Definition 2.2.1 The nonnegative rank of M – denoted by rank+(M ) – is the smallest
k such that there are nonnegative matrices A and W of size m × k and k × n
respectively that satisfy M = AW .
Equivalently, rank+(M ) is the smallest k such that there are k nonnegative rank
one matrices {Mi} that satisfy M = \sum_i M_i.
easy to see that rank(M ) = 3. However, M has zeros along the diagonal and
non-zeros off it. Furthermore for any rank one nonnegative matrix Mi, its pattern
of zeros and non-zeros is a combinatorial rectangle - i.e. the intersection of some set
of rows and columns - and a standard argument implies that rank+(M ) = Ω(log n).
finite amount of time. But indeed there are
algorithms (that run in some fixed amount of time) to decide whether a system
of polynomial inequalities has a solution or not in the real RAM model. These
algorithms can also compute an implicit representation of the solution, if there is
. So even if k = O(1), we would need a linear number of
variables and the running time would be exponential. However we could hope that
even though the naive representation uses many variables, perhaps there is a more
clever representation that uses many fewer variables. Can we reduce the number of
variables in the …
one of the foundational results in the field, and is often called quantifier
elimination [110], [107]. To gain some familiarity with this notion, consider the
case of algebraic sets (defined analogously as above, but with polynomial equality
constraints instead of inequalities). Indeed, the above theorem implies that t…
Then as x ranges over R^r the number of distinct sign patterns is at most (nD)^r.
A priori we could have expected as many as 3^n sign patterns. In fact, algorithms
for solving systems of polynomial inequalities are based on cleverly enumerating the
set of sign patterns so that the total running time is dominated by …
and row rank respectively.
Proof: The span of the columns of A must contain the columns of M and similarly
the span of the rows of W must contain the rows of M . Since rank(M ) = k and
A and W have k columns and rows respectively we conclude that the A and W
must have full column and row rank respectively. Moreover…
basis to rewrite M as MR which is a k × m matrix, and there
is a k × k linear transformation T (obtained from A^+ and the change of basis) so
that T MR = W . A similar approach works for W , and hence we get a new system:
$$\text{(2.3)}\qquad \begin{cases} M_C\, S\, T\, M_R = M \\ M_C\, S \ge 0 \\ T\, M_R \ge 0 \end{cases}$$
2.3. STABILITY AND SEPARABILITY
Th…
yields a doubly-exponential time algorithm as a function of k. The crucial
observation is that even if A does not have full column rank, we could write a system
of polynomial inequalities that has a pseudo-inverse for each set of its columns that
is full rank (and similarly for W ). However A could have a…
its facets. Is there a simplex
K with P ⊆ K ⊆ Q?
We would like to connect this problem to nonnegative matrix factorization, since
it will help us build up a geometric view of the problem. Consider the following
problem:
Given nonnegative matrices M and A, does there exist W ≥ 0 such
that M = AW ?
The answer is …
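This existence question can be checked numerically with nonnegative least squares. In the Python sketch below (matrices of my own choosing), membership means the best w ≥ 0 has zero residual:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
A = rng.random((6, 3))  # arbitrary nonnegative matrix

def in_cone(A, m, tol=1e-8):
    # Does some w >= 0 satisfy A w = m?  Nonnegative least squares
    # finds the best w >= 0; membership means (near-)zero residual.
    w, rnorm = nnls(A, m)
    return rnorm <= tol

inside = A @ np.array([0.5, 2.0, 1.0])  # in the cone by construction
outside = -np.ones(6)                   # negative: unreachable with w >= 0
assert in_cone(A, inside)
assert not in_cone(A, outside)
```

Applying this column-by-column to M answers the question: a nonnegative W with M = AW exists exactly when every column of M lies in the cone generated by the columns of A.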
= AW (where the inner-dimension equals the rank of
M ), the column spaces of M , U and A are identical. Similarly the row spaces of
M , V and W are also identical.
The more interesting aspect of the proof is the equivalence between (P0) and
the intermediate simplex problem. The translation is:
(a) rows of U ⇐⇒ ve…
perturbations to the problem. Motivated by issues of
uniqueness and robustness, Donoho and Stodden [54] introduced a condition called
separability that alleviates many of these problems, which we will discuss in the next
subsection.
Separability
Definition 2.3.1 We call A separable if, for every column of A, there exists a row of A whose only nonzero entry is in that column.
finance.
Why do anchor words help? It is easy to see that if A is separable, then the
rows of W appear as rows of M (after scaling). Hence we just need to determine
which rows of M correspond to anchor words. We know from our discussion in
Section 2.3 that (if we scale M , A and W so that their rows sum to one) the c…
start. Hence the anchor words
that are deleted are redundant and we could just as well do without them.
Separable NMF [13]
Input: matrix M ∈ Rn×m satisfying the conditions in Theorem 2.3.2
Output: A, W
Run Find Anchors on M , let W be the output
Solve for nonnegative A that minimizes ‖M − AW‖F (convex programming)
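A Python sketch of this pipeline on a synthetic separable instance. The text has not yet specified Find Anchors, so the successive-projection step below is a stand-in anchor finder, and all dimensions are my own choices:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n, m, k = 30, 8, 3  # arbitrary dimensions

# Separable instance: A's top k x k block is the identity, so rows
# 0..k-1 of M are exactly the rows of W (these are the anchors).
W = rng.random((k, m))
A = np.vstack([np.eye(k), rng.dirichlet(np.ones(k), size=n - k)])
M = A @ W

def find_anchors(M, k):
    # Stand-in for "Find Anchors": successive projection. Greedily take
    # the row of largest norm (the norm, being convex, is maximized at a
    # vertex of the convex hull of the rows), then project all rows onto
    # the orthogonal complement of the chosen row and repeat.
    R = M.astype(float).copy()
    anchors = []
    for _ in range(k):
        i = int(np.argmax(np.linalg.norm(R, axis=1)))
        anchors.append(i)
        u = R[i] / np.linalg.norm(R[i])
        R -= np.outer(R @ u, u)
    return sorted(anchors)

anchors = find_anchors(M, k)
W_hat = M[anchors]
# Solve for nonnegative A minimizing ||M - A W_hat||_F (row-by-row NNLS).
A_hat = np.vstack([nnls(W_hat.T, M[i])[0] for i in range(n)])
err = np.linalg.norm(M - A_hat @ W_hat)
```

On exact separable data the anchors are recovered and the residual is essentially zero; robustness to noise is the subject of the theorems cited in the text.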
involves projecting a
point into a (k − 1)-dimensional simplex.
2.4 Topic Models
Here we will consider a related problem called topic modeling; see [28] for a
comprehensive introduction. This problem is intimately related to nonnegative
matrix factorization, with two crucial differences. Again there is some factor…
we knew A we cannot compute W
exactly. Alternatively, M̂ and M can be quite different since the former may be
sparse while the latter is dense. Are there provable algorithms for topic modeling?
The Gram Matrix
We will follow an approach of Arora, Ge and Moitra [14]. At first this seems like a
fundamentally different problem…
precisely:
Lemma 2.4.3 G = ARAT
Proof: Let w1 denote the first word and let t1 denote the topic of w1 (and similarly
for w2). We can expand P[w1 = a, w2 = b] as:
$$P[w_1 = a, w_2 = b] = \sum_{i,j=1}^{r} P[w_1 = a, w_2 = b \mid t_1 = i, t_2 = j]\; P[t_1 = i, t_2 = j]$$

and the lemma is now immediate. •
The key observation is that G has a separable nonnegative matrix factorization…
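The lemma can be checked by simulation. In the NumPy sketch below (a made-up A and R of my own choosing), word pairs sampled from the generative process produce an empirical matrix close to ARA^T:

```python
import numpy as np

rng = np.random.default_rng(5)
n, r = 5, 2  # vocabulary size and number of topics (arbitrary choices)

A = rng.random((n, r))
A /= A.sum(axis=0)        # column-stochastic: A[w, t] = P(word = w | topic = t)
R = rng.random((r, r))
R /= R.sum()              # joint distribution over the topic pair (t1, t2)

# Sample word pairs from the generative process in the proof:
# (t1, t2) ~ R, then w1 ~ A[:, t1] and w2 ~ A[:, t2] independently.
N = 200_000
flat = rng.choice(r * r, size=N, p=R.ravel())
t1, t2 = np.unravel_index(flat, (r, r))
cdf = A.cumsum(axis=0)    # per-topic CDFs for vectorized categorical sampling
w1 = (rng.random(N)[:, None] < cdf[:, t1].T).argmax(axis=1)
w2 = (rng.random(N)[:, None] < cdf[:, t2].T).argmax(axis=1)

G_emp = np.zeros((n, n))
np.add.at(G_emp, (w1, w2), 1.0)
G_emp /= N

G = A @ R @ A.T           # the lemma: G[a, b] = P[w1 = a, w2 = b]
assert np.abs(G_emp - G).max() < 0.01
```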
P(w1 = w′|w2 = w, t2 = t) = P(w1 = w′|t2 = t)
= P(w1 = w′|w2 = anchor(t)),
which we can compute from G after having determined the anchor words. Hence:
$$P(w_1 = w' \mid w_2 = w) = \sum_{t=1}^{r} P(w_1 = w' \mid w_2 = \mathrm{anchor}(t))\; P(t_2 = t \mid w_2 = w)$$

which we can think of …
and sample complexity (number of documents) is poly(n, 1/p, 1/ε, 1/σmin(R)),
provided documents have length at least two.
In the next subsection we describe some experimental results.
Recover [14], [12]
Input: term-by-document matrix M ∈ Rn×m
Output: A, R
Compute G, compute P(w1 = w′|w2 = w)
Run Find Anchor…
In this chapter we will study algorithms for tensor decompositions and their
applications to statistical inference.
3.1 Basics
Here we will introduce the basics of tensors. A matrix is an order two tensor – it is
indexed by a pair of numbers. In general a tensor is indexed over k-tuples, and k is
called the order of the tensor.
was a famous psychologist who postulated that there are essentially two types of
intelligence: mathematical and verbal. In particular, he believed
that how well a student performs at a variety of tests depends only on their intrinsic
aptitudes along these two axes. To test his theory, he set up a study where a thousand…
we determine {xi}i and {yi}i if we know M ?
Actually, there are only trivial conditions under which we can uniquely determine
these factors. If r = 1 or if we know for a priori reasons that the vectors {xi}i and
{yi}i are orthogonal, then we can. But in general we could take the singular value
decomposition of M = …
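This non-uniqueness (the rotation problem) is easy to see numerically. In the NumPy sketch below (random factors of my own choosing), rotating both factor sets by any orthogonal Q leaves the sum of outer products unchanged:

```python
import numpy as np

rng = np.random.default_rng(6)
n, r = 6, 2
X = rng.standard_normal((n, r))  # columns are the x_i
Y = rng.standard_normal((n, r))  # columns are the y_i
M = X @ Y.T                      # M = sum_i x_i y_i^T

# Rotate both factor sets by an orthogonal Q: since Q Q^T = I,
# (X Q)(Y Q)^T = X Y^T, so M is unchanged.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X2, Y2 = X @ Q, Y @ Q

assert np.allclose(X2 @ Y2.T, M)  # same matrix M ...
assert not np.allclose(X2, X)     # ... from genuinely different factors
```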
further assumptions) but most
tensor problems are hard [71]! Even worse, many of the standard relations in linear
algebra do not hold and even the definitions are in some cases not well-defined.
(a) For a matrix A, dim(span({Ai}i)) = dim(span({Aj }j )) (the column rank
equals the row rank).
However no such relation …
Consider the following 2 × 2 × 2 tensor T , over R, written as a pair of 2 × 2 slices:

$$T = \left(\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},\; \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}\right).$$

We will omit the proof that T has rank 3, but show that T admits an arbitrarily
close rank 2 approximation. Consider the following matrices

$$S_n = \left(\begin{bmatrix} n & 1 \\ 1 & 1/n \end{bmatrix},\; \begin{bmatrix} 1 & 1/n \\ 1/n & 1/n^2 \end{bmatrix}\right) \quad\text{and}\quad R_n = \left(\begin{bmatrix} n & 0 \\ 0 & 0 \end{bmatrix},\; \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}\right).$$

Note that $S_n = n\,(1, 1/n)^{\otimes 3}$ and $R_n = n\, e_1^{\otimes 3}$ are each rank one, and henc…
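This convergence can be checked numerically. The NumPy sketch below builds the tensors from the rank-one forms S_n = n(1, 1/n)^{⊗3} and R_n = n e_1^{⊗3} (an identity one can verify entrywise against the slices above) and watches the approximation error fall as n grows:

```python
import numpy as np

def from_slices(s1, s2):
    # Assemble a 2x2x2 tensor from its two 2x2 slices T[:, :, 0], T[:, :, 1]
    return np.stack([np.asarray(s1, float), np.asarray(s2, float)], axis=2)

T = from_slices([[0, 1], [1, 0]], [[1, 0], [0, 0]])

def rank_two_approx(n):
    # S_n = n (1, 1/n)^{x3} and R_n = n e_1^{x3} are rank one,
    # so S_n - R_n has rank at most two.
    a = np.array([1.0, 1.0 / n])
    e1 = np.array([1.0, 0.0])
    Sn = n * np.einsum('i,j,k->ijk', a, a, a)
    Rn = n * np.einsum('i,j,k->ijk', e1, e1, e1)
    return Sn - Rn

# The entrywise error decays like 1/n: T has border rank 2 but rank 3.
errs = [np.abs(rank_two_approx(n) - T).max() for n in (10, 100, 1000)]
assert errs == sorted(errs, reverse=True) and errs[-1] < 1e-2
```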
Dr. Robert Jennrich.
We will state and prove a version of this result that is more general, following the
approach of Leurgans, Ross and Abel [87]:
Theorem 3.1.3 [70], [87] Consider a tensor

$$T = \sum_{i=1}^{r} u_i \otimes v_i \otimes w_i$$

where each set of vectors {ui}i and {vi}i are linearly independent, and moreover each
pair of vectors in {wi}i is linearly independent…
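A NumPy sketch of the simultaneous-diagonalization idea behind this theorem (a toy on a random square instance of my own choosing; the general algorithm handles r < n via pseudo-inverses): contract the third mode with two random vectors and read the u_i off an eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(7)
n = r = 4  # square case for simplicity (my own choice)
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
W = rng.standard_normal((n, r))
T = np.einsum('ir,jr,kr->ijk', U, V, W)   # T = sum_i u_i ⊗ v_i ⊗ w_i

# Contract the third mode with random a, b:
#   T_a = U diag(W^T a) V^T  and  T_b = U diag(W^T b) V^T.
a, b = rng.standard_normal(n), rng.standard_normal(n)
Ta = np.einsum('ijk,k->ij', T, a)
Tb = np.einsum('ijk,k->ij', T, b)

# T_a T_b^{-1} = U diag(W^T a / W^T b) U^{-1}, so its eigenvectors are
# the columns of U (up to scaling and permutation), since the diagonal
# ratios are distinct with probability one.
vals, vecs = np.linalg.eig(Ta @ np.linalg.inv(Tb))

def cosine(x, y):
    return abs(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Every recovered eigenvector should be parallel to some true u_i.
matches = [max(cosine(vecs[:, j].real, U[:, i]) for i in range(r))
           for j in range(r)]
assert min(matches) > 0.999
```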