Source: https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
Pair up ui and vi iff their eigenvalues are reciprocals
Solve for wi in T = Σ_{i=1}^r ui ⊗ vi ⊗ wi
End
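The whole algorithm (simultaneous diagonalization) can be sketched in numpy. This is a minimal sketch, not from the text: the einsum contractions, the reciprocal-eigenvalue pairing by argmin, and the least-squares solve for the wi are implementation choices; the variable names Ta, Tb, U, V, W follow the surrounding discussion.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 3
U, V, W = (rng.standard_normal((n, r)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', U, V, W)        # T = sum_i u_i ⊗ v_i ⊗ w_i

a, b = rng.standard_normal(n), rng.standard_normal(n)
Ta = np.einsum('ijk,k->ij', T, a)              # weighted sum of matrix slices
Tb = np.einsum('ijk,k->ij', T, b)

# Eigenvectors of Ta (Tb)^+ are the u_i (eigenvalues (Da)ii/(Db)ii);
# eigenvectors of ((Ta)^+ Tb)^T are the v_i, with reciprocal eigenvalues.
lu, Uh = np.linalg.eig(Ta @ np.linalg.pinv(Tb))
lv, Vh = np.linalg.eig((np.linalg.pinv(Ta) @ Tb).T)

# Pair each u_i with the v_j whose eigenvalue is closest to its reciprocal
iu = np.argsort(-np.abs(lu))[:r]
jv = [int(np.argmin(np.abs(lu[i] * lv - 1))) for i in iu]
Ur, Vr = np.real(Uh[:, iu]), np.real(Vh[:, jv])

# Solve the linear system T = sum_i u_i ⊗ v_i ⊗ w_i for the w_i
KR = np.stack([np.kron(Ur[:, t], Vr[:, t]) for t in range(r)], axis=1)
Wr = np.linalg.lstsq(KR, T.reshape(n * n, n), rcond=None)[0].T
That = np.einsum('ir,jr,kr->ijk', Ur, Vr, Wr)
assert np.allclose(That, T, atol=1e-6)         # tensor reconstructed exactly
```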
Recall that Ta is just the weighted sum of matrix slices through T , each weighted
by ai. It is easy to see that:
Claim 3.1.4 Ta = Σ_{i=1}^r ⟨wi, a⟩ ui vi^T and Tb = Σ_{i=1}^r ⟨wi, b⟩ ui vi^T …
, and similarly for V . •
Now to complete the proof of the theorem, notice that ui and vi, as eigenvectors of Ta(Tb)+ and Tb(Ta)+ respectively, have eigenvalues (Da)i,i/(Db)i,i and (Db)i,i/(Da)i,i. (Again, the diagonals of Da(Db)+ are distinct almost surely, and so vi is the only eigenvector that ui could be pa…
the proof of the theorem.
Note that if T is size m×n×p then the conditions of the theorem can only hold
if r ≤ min(m, n). There are extensions of the above algorithm that work for higher
order tensors even if r is larger than any of the dimensions of its factors [48], [66], [26], and there are interesting applications …
position itself is stable. More precisely, let M = U D U^{-1}, where D is a diagonal matrix. If we are given M̃ = M + E, when can we recover good estimates to U?
Intuitively, if any of the diagonal entries in D are close or if U is ill-conditioned, then even a small perturbation E can drastically change the eigendeco…
In other words, the condition number controls the relative error when solving a linear system.
32
CHAPTER 3. TENSOR METHODS
Gershgorin’s Disk Theorem
Recall our first intermediate goal is to show that M + E is diagonalizable, and we
will invoke the following theorem:
Theorem 3.2.2 Each eigenvalue of a matrix M lies in some disk {z : |z − Mi,i| ≤ Ri}, where Ri = Σ_{j ≠ i} |Mi,j| is the sum of the absolute values of the off-diagonal entries of the ith row …
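The disk theorem is easy to check numerically. A numpy sketch, not from the text; the diagonally dominant test matrix is an arbitrary choice: every eigenvalue must land in some disk centered at Mi,i with radius Ri = Σ_{j≠i}|Mi,j|.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Well-separated diagonal plus a small perturbation
M = np.diag(10.0 * np.arange(1, n + 1)) + 0.5 * rng.standard_normal((n, n))
centers = np.diag(M)
radii = np.abs(M).sum(axis=1) - np.abs(centers)   # R_i = sum_{j != i} |M_ij|
for lam in np.linalg.eigvals(M):
    # every eigenvalue lies in some disk {z : |z - M_ii| <= R_i}
    assert np.any(np.abs(lam - centers) <= radii + 1e-9)
```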
Moreover we can apply Theorem 3.2.2 to U^{-1} M̃ U = D + U^{-1} E U, and if U is well-conditioned and E is sufficiently small, the radii will be much smaller than the gap between the closest pair of diagonal entries in D. Hence we conclude that the eigenvalues of U^{-1} M̃ U, and also those of M̃, are distinct, and hence the latter can be diag…
that M̃ = M + E. Writing ũi = Σ_{j=1}^r cj uj in the basis of eigenvectors of M, we get
Σ_{j=1}^r cj λj uj + E ũi = λ̃i ũi =⇒ Σ_{j=1}^r cj (λj − λ̃i) uj = −E ũi.
Let wj^T be the jth row of U^{-1}. Left-multiplying both sides of the above equation by wj^T, we get
cj (λj − λ̃i) = −wj^T E ũi.
Recall we have assumed that the eigenvalues of M are separated. Hence if E is sufficiently smal…
conditions. It follows that T̃a(T̃b)+ → Ta(Tb)+ and T̃b(T̃a)+ → Tb(Ta)+. We have already established that if E → 0, then the eigendecompositions of M and M + E converge. Finally we conclude that the algorithm in Theorem 3.1.3 computes factors Ũ and Ṽ which converge to the true fa…
conditions, for r = (1 + ε)n for any ε > 0?
For example, it is natural to consider a smoothed analysis model for tensor decomposition [26] where the factors of T are perturbed and hence not adversarially chosen. The above uniqueness theorem would apply up to r = (3/2)n − O(1) but there are no known algorithms for te…
think of s(·) : V → Σ as a random function that assigns
states to vertices where the marginal distribution on s(r) is πr and
P^{uv}_{ij} = P(s(v) = j | s(u) = i),
Note that s(v) is independent of s(t) conditioned on s(u) whenever the (unique)
shortest path from v to t in the tree passes through u.
Our goal is to learn t…
Steel [109] and Erdős, Steel, Székely, and Warnow [57]. From this, we can apply tensor methods to find the transition
matrices following the approach of Chang [36] and later Mossel and Roch [96].
Finding the Topology
The basic idea here is to define an appropriate distance function [109] on the edges
of the tree…
Reconstructing Quartets
Here we will use ψ to compute the topology. Fix four leaves a, b, c, and d, and
there are exactly three possible induced topologies between these leaves, given in
Figure 3.1. (Here by induced topology, we mean delete edges not on any shortest
path between any pair of the four leaves, and con…
each quartet test. Hence
we can pair up all of the leaves so that they have the same parent, and it is not hard
to extend this approach to recover the topology of the tree.
Handling Noise
Note that we can only approximate F ab from our samples. This translates into a
good approximation of ψab when a and b are clos…
tensor decomposition of T whose factors are unique up to rescaling. Furthermore the factors are
probability distributions and hence we can compute their proper normalization. We
will call this procedure a star test. (Indeed, the algorithm for tensor decompositions
in Section 3.1 has been re-discovered many times a…
model that produces
almost the same joint distribution on the leaves. (In particular, if there are multiple
ways to label the internal nodes to produce the same joint distribution on the leaves,
we are indifferent to them).
Remark 3.3.1 HMMs are a special case of phylogenetic trees where the underlying
topology is …
1S is the indicator function for S. Furthermore if
we choose Ω(n log n) samples then A is w.h.p. full column rank and so this solution
is unique. We can then find S by solving a linear system over GF (2).
Yet a slight change in the above problem does not change the sample complexity but makes the problem drasticall…
not full rank. Consider an HMM that has n hidden nodes, where the ith hidden node is used to represent the ith coordinate of X and the running parity
χ_{Si}(X) := Σ_{i' ≤ i, i' ∈ S} X(i') mod 2.
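The running parity that the hidden nodes track can be simulated directly. An illustrative numpy sketch, not from the text; the subset S and the helper name running_parity are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 8
S = {1, 4, 6}                                  # hypothetical hidden subset
X = rng.integers(0, 2, size=n)

def running_parity(X, S):
    """chi_{S_i}(X) for i = 0..n-1: parity of the coordinates in S seen so far."""
    par, out = 0, []
    for i, xi in enumerate(X):
        if i in S:
            par ^= int(xi)
        out.append(par)
    return out

pars = running_parity(X, S)
assert pars[-1] == sum(int(X[i]) for i in S) % 2   # final parity is chi_S(X)
```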
Hence each node has four possible states. We can define the following transition …
Detection
Here we give applications of tensor methods to community detection. There are
many settings in which we would like to discover communities, that is, groups of people with strong ties. Here we will focus on graph theoretic approaches, where we will think of a community as a set of nodes that are better co…
where
q < p too. (In particular, when q = 0 we could ask to find a k-coloring of this
random graph). Regardless, we observe a random graph generated from the above
model and our goal is to recover the partition described by π.
When is this information theoretically possible? In fact even for k = 2 where
π is a bise…
best rank k approximation to the adjacency matrix. For the full details,
see [91].
We will instead follow the approach in Anandkumar et al [9] that makes use
of tensor decompositions instead. In fact, the algorithm of [9] also works in the
mixed membership model where we allow each node to be a distribution over [k…
R_{i,j} = p if i = j, and R_{i,j} = q if i ≠ j.
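Sampling a graph from this block stochastic model is straightforward. A numpy sketch with arbitrary parameters, not from the text: edges appear with probability p inside a community and q across communities.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, p, q = 200, 2, 0.5, 0.1
pi = rng.integers(0, k, size=n)                  # hidden community labels
P = np.where(pi[:, None] == pi[None, :], p, q)   # P[u, v] = R_{pi(u), pi(v)}
A = (rng.random((n, n)) < P).astype(int)
A = np.triu(A, 1); A = A + A.T                   # undirected, no self-loops

same = A[pi[:, None] == pi[None, :]]
diff = A[pi[:, None] != pi[None, :]]
assert abs(same.mean() - p) < 0.05               # empirical densities track p, q
assert abs(diff.mean() - q) < 0.05
```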
Consider the product ΠR. The ith column of ΠR encodes the probability that
an edge occurs from a vertex in community i to a given other vertex:
(ΠR)_{x,i} = Pr[(x, a) ∈ E | π(a) = i].
We will use (ΠR)^A_i to denote the matrix ΠR restricted to the ith column and the rows in A, and similarly for…
well-conditioned so that we can approximate them from an approximation T̃ to T. See Section 3.2.
(c) We can recover π from {(ΠR)^A_i}_i up to relabeling.
Part (a) Let {Xa,b,c}a,b,c be a partition of X into almost equal sized sets, one for
each a ∈ A, b ∈ B and c ∈ C. Then
T̃_{a,b,c} = |{x ∈ X_{a,b,c} | (x, a), (x, b), (…
We will not cover this latter extension because we will instead explain those types
of techniques in the setting of topic models next.
We note that there are powerful extensions to the block-stochastic model that
are called semi-random models. Roughly, these models allow an “adversary” to add
edges between nodes in…
unique up to rescaling, and
the algorithm will find them. Finally, each column in A is a distribution and so
we can properly normalize these columns and compute the values pi too. Recall in
Section 3.2 we analyzed the noise tolerance of our tensor decomposition algorithm.
It is easy to see that this algorithm recove…
wj according to the Dirichlet distribution Dir({αi}i)
(b) Repeat Nj times: choose a topic i from wj , and choose a word according to
the distribution Ai.
The Dirichlet distribution is defined as
p(x) ∝ Π_i x_i^{αi − 1} for x ∈ Δ
Note that if documents are long (say Nj > n log n) then in a pure topic model, pa…
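The generative process in steps (a) and (b) can be sketched as follows. A numpy sketch, not from the text; the topic matrix A, the vocabulary size, and the sampler name are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
k, n_words = 3, 50
alpha = np.array([0.2, 0.2, 0.2])               # small alpha: nearly pure topics
A = rng.dirichlet(np.ones(n_words), size=k).T   # columns A_i are topics

def sample_document(N):
    w = rng.dirichlet(alpha)                    # (a) w_j ~ Dir({alpha_i}_i)
    topics = rng.choice(k, size=N, p=w)         # (b) a topic for each word
    return np.array([rng.choice(n_words, p=A[:, t]) for t in topics])

doc = sample_document(100)
assert doc.shape == (100,) and 0 <= doc.min() and doc.max() < n_words
```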
3. However the bad news is that a Tucker
decomposition is in general not unique, so even if we are given T we cannot necessarily compute the above decomposition whose factors are the topics in the topic
model.
How can we extend the tensor spectral approach to work with mixed models?
The elegant approach of Anandk…
These experiments result in tensors whose factors are the same, but whose cores
differ in their natural Tucker decomposition.
Definition 3.5.3 Let µ, M and D be the first, second and third order moments of
the Dirichlet distribution.
More precisely, let µi be the probability that the first word in a random docu…
… = i, t2 = j, t3 = k]_{i,j,k} and the lemma is now immediate. •
Note that T²_{a,b,c} ≠ T²_{a,c,b} because two of the words come from the same document. Nevertheless, we can symmetrize T² in the natural way: Set S_{a,b,c} = T²_{a,b,c} + T²_{b,c,a} + T²_{c,a,b}. Hence S_{a,b,c} = S_{π(a),π(b),π(c)} for any permutation π : {a, b, c} → {a, b, c}. …
statement about the
moments of a Dirichlet distribution. In fact, we can think about the Dirichlet as
instead being defined by the following combinatorial process:
(a) Initially, there are αi balls of each color i
(b) Repeat C times: choose a ball at random, place it back with one more of its
own color
This proces…
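The urn process can be simulated and checked against a closed-form moment it is meant to capture: the probability that the first two balls are both color i is αi(αi + 1)/(α0(α0 + 1)). A numpy sketch, not from the text; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha = np.array([1.0, 2.0, 3.0]); a0 = alpha.sum()

def urn_draws(n_draws):
    counts = alpha.copy()                 # alpha_i balls of each color i
    out = []
    for _ in range(n_draws):
        i = rng.choice(len(alpha), p=counts / counts.sum())
        counts[i] += 1.0                  # put it back with one more of its color
        out.append(i)
    return out

trials = 20000
both0 = sum(urn_draws(2) == [0, 0] for _ in range(trials)) / trials
# P(first two balls both color 0) = alpha_0 (alpha_0 + 1) / (a0 (a0 + 1))
assert abs(both0 - alpha[0] * (alpha[0] + 1) / (a0 * (a0 + 1))) < 0.01
```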
formulas in the other cases too. And the probability that the third ball is color k is …
Note that it is much easier to think about only the numerators in the above formulas. If we can prove the following relation for just the numerators
D + 2µ^{⊗3} − M ⊗ µ (all three ways) = diag({2αi}i)
it is easy t…
The reason this case is tricky is because the terms M ⊗ µ (all three ways) do not all count the same. If we think of µ along the third dimension of the tensor then the ith topic occurs twice in the same document, but if instead we think of µ as along either the first or second dimension of the tensor, even thoug…
model, provided we are given at least poly(n, 1/t, 1/σr, 1/αmin) documents of length at least three, where n is the size of the vocabulary, σr is the smallest singular value of A, and αmin is the smallest αi.
Similarly, there are algorithms for community detection in mixed models too, where
for each node u we cho…
of the conversations given by the
corresponding rows of A. If we think of the conversations as independent
and memoryless, can we disentangle them?
Such problems are also often referred to as blind source separation. We will follow
an approach of Frieze, Jerrum and Kannan [62].
Step 1
We can always transform the …
to consider higher order tensors. This time we will proceed
in a different manner.
Since M > 0 we can find B such that M = BBT . How are B and A related?
In fact, we can write
BBT = AAT ⇒ B−1AAT (B−1)T = I
and this implies that B^{-1}A is orthogonal, since a square matrix times its own transpose is the identity if and…
optimization problem:
min_{‖v‖_2 = 1} E[(v^T x)^4]
What are its local minima?
E[(v^T x)^4] = E[Σ_i (vi xi)^4 + 6 Σ_{i<j} (vi xi)^2 (vj xj)^2]
= Σ_i vi^4 E[xi^4] + 6 Σ_{i<j} vi^2 vj^2
= Σ_i vi^4 E[xi^4] + 3 (Σ_i vi^2)^2 − 3 Σ_i vi^4
= Σ_i vi^4 (E[xi^4] − 3) + 3
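The final identity Σ_i vi^4(E[xi^4] − 3) + 3 is easy to verify numerically. A numpy sketch, not from the text; the uniform distribution is an arbitrary choice of a mean-zero, variance-one source, for which E[xi^4] = 9/5.

```python
import numpy as np

rng = np.random.default_rng(5)
n, samples = 4, 400000
# Uniform on [-sqrt(3), sqrt(3)]: mean 0, variance 1
x = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(samples, n))
v = rng.standard_normal(n); v /= np.linalg.norm(v)

empirical = np.mean((x @ v) ** 4)
ex4 = 9.0 / 5.0                              # E[x_i^4] = a^4 / 5 with a = sqrt(3)
predicted = np.sum(v ** 4) * (ex4 - 3) + 3   # the identity above
assert abs(empirical - predicted) < 0.1
```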
with since
they satisfy the appealing property that the cumulants of the sum of independent variables Xi and Xj are themselves the sums of the cumulants of Xi and Xj.
This is precisely the property we exploited here.
Chapter 4
Sparse Recovery
In this chapter we will study algorithms for sparse recovery: give…
take advantage of the sparsity of x. We
will show that under certain conditions on A, if x is sparse enough then indeed it is
the uniquely sparsest solution to Ax = b.
53
54
CHAPTER 4. SPARSE RECOVERY
Our first goal is to prove that finding the sparsest solution to a linear system
is hard. We will begin with the …
Σ_{i∈I} αi = 0 if and only if the set of vectors {Γw(αi)}_{i∈I} is linearly dependent.
Proof: Consider the determinant of the matrix whose columns are {Γw(αi)}i∈I .
Then the proof is based on the following observations:
(a) The determinant is a polynomial in the variables αi with total degree (n choose 2) + 1, …
could find the sparsest
non-zero vector in S we could solve the above variant of subset sum.
In fact, this same proof immediately yields an interesting result in computational geometry (that was "open" several years after Khachiyan's paper).
Definition 4.1.3 A set of m vectors in Rn is in general position if every s…
and suppose ‖x_{−i}‖_0 = k. We can build a solution x to Ax = 0 with ‖x‖_0 = k + 1 by setting the i*-th coordinate of x to be −1. Indeed, it is not hard to see that x is the sparsest solution to Ax = 0. •
4.2 Uniqueness and Uncertainty Principles
Incoherence
Here we will define…
are even better constructions of incoherent vectors that remove the logarithmic factors; this is almost optimal since for any m > n, any set of m vectors in Rn has incoherence at least 1/√n.
We will return to the following example several times: Consider the matrix
A = [I, D], where I ∈ Rn×…
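For this "spikes and sines" example, taking D to be the orthonormal discrete Fourier basis (an assumption of this sketch, since the chunk is cut off), the incoherence is exactly 1/√n. A numpy sketch with an arbitrary choice of n:

```python
import numpy as np

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # orthonormal discrete Fourier basis
A = np.hstack([np.eye(n), F])            # spikes and sines
G = np.abs(A.conj().T @ A)               # Gram matrix, in absolute value
np.fill_diagonal(G, 0)
mu = G.max()                             # incoherence of A
assert abs(mu - 1 / np.sqrt(n)) < 1e-10
```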
Informally, a signal cannot be too localized in both the time and frequency domains simultaneously!
Proof: Since U and V are orthonormal we have that ‖b‖_2 = ‖α‖_2 = ‖β‖_2. We can rewrite b as either Uα or Vβ and hence ‖b‖_2^2 = |β^T (V^T U) α|. Because A is incoherent, we can conclude tha…
uniqueness result:
Claim 4.2.3 Suppose A = [U, V] where U and V are orthonormal and A is µ-incoherent. If Ax = b and ‖x‖_0 < 1/µ, then x is the uniquely sparsest solution.
Proof: Consider any alternative solution Ax̃ = b. Set y = x − x̃, in which case y ∈ ker(A). Write y as y = [αy, βy]^T and since Ay = 0, we have …
of points, since deciding whether or not the Kruskal rank is
n is precisely the problem of deciding whether the points are in general position.
Nevertheless, the Kruskal rank of A is the right parameter for analyzing how sparse x must be in order for it to be the uniquely sparsest solution to Ax = b. Suppose the Kru…
matrix of A^T A is full rank for r = 1/µ.
So consider any such submatrix. The diagonals are one, and the off-diagonals
have absolute value at most µ by assumption. We can now apply Gershgorin’s disk
theorem and conclude that the eigenvalues of the submatrix are strictly greater than
zero provided that r ≤ 1/µ (whic…
Set S = ∅ and r^0 = b.
For ℓ = 1, 2, . . . , k
Choose the column j that maximizes |⟨Aj, r^{ℓ−1}⟩| / ‖Aj‖_2^2.
Add j to S.
Set r^ℓ = proj_{U⊥}(b), where U = span(AS).
If r^ℓ = 0, break.
End
Solve for xS: AS xS = b. Set x_{S̄} = 0.
Let A be µ-incoherent and suppose that there is a solution x with k…
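The algorithm above fits in a few lines of numpy. A minimal sketch, not from the text: the function name omp, the random test instance, and the duplicate-column guard are choices of this sketch; the re-solve of AS xS = b at each stage is the projection step.

```python
import numpy as np

def omp(A, b, k, tol=1e-10):
    """Orthogonal matching pursuit (sketch): greedily add the column most
    correlated with the residual, then re-project b onto span(A_S)."""
    S, r = [], b.astype(float)
    xS = np.zeros(0)
    for _ in range(k):
        scores = np.abs(A.T @ r) / np.sum(A ** 2, axis=0)
        j = int(np.argmax(scores))
        if j not in S:
            S.append(j)
        # Recompute all coefficients on S (the "orthogonal" step)
        xS, *_ = np.linalg.lstsq(A[:, S], b, rcond=None)
        r = b - A[:, S] @ xS
        if np.linalg.norm(r) < tol:
            break
    x = np.zeros(A.shape[1])
    x[S] = xS
    return x

rng = np.random.default_rng(6)
A = rng.standard_normal((100, 200))
A /= np.linalg.norm(A, axis=0)          # unit-norm (incoherent w.h.p.) columns
x_true = np.zeros(200); x_true[[5, 70, 150]] = [2.0, -2.0, 2.0]
b = A @ x_true
x_hat = omp(A, b, k=10)
assert np.linalg.norm(b - A @ x_hat) < 1e-6
```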
Lemma 4.3.1 If S ⊆ T at the start of a stage, then the algorithm selects j ∈ T .
We will first prove a helper lemma:
Lemma 4.3.2 If r −1 is supported in T at the start of a stage, then the algorithm
selects j ∈ T .
Proof: Let…
|Σ_{i=1}^{k'} yi ⟨A_ℓ, Ai⟩| ≤ |y1| k' µ,
which, under the assumption that k' ≤ k < 1/(2µ), is strictly upper-bounded by |y1|/2. Since |y1|(1/2 + µ) > |y1|/2, we conclude that condition (4.1) holds, guaranteeing that the algorithm selects…
Matching Pursuit
We note that matching pursuit differs from orthogonal matching pursuit in a crucial
way: In the latter, we recompute the coefficients xi for i ∈ S at the end of each
stage because we project b perpendicular to U . However we could hope that these
coefficients do not need to be adjusted much when we add a new index j…
. . . , 1]; the second row is (1/√n)[1, ω, ω^2, . . .], etc.
We will make use of the following basic properties of F:
(a) F is orthonormal: F^H F = F F^H = I, where F^H is the Hermitian transpose of F
(b) F diagonalizes the convolution operator
In particular, we will define the c…
… = M_c x = F^H diag(ĉ) F x = F^H diag(ĉ) x̂ = F^H (ĉ ⊙ x̂), and this completes the proof. •
We introduce the following helper polynomial, in order to describe Prony’s method:
Definition 4.4.4 (Helper polynomial)
p(z) = Π_{b ∈ supp(x)} ω^{−b}(ω^b − z) = 1 + λ1 z + . . . + λk z^k.
Claim 4.4.5 If we know p(z), we ca…
… = 0, and so:
Corollary 4.4.7 M_{x̂} v = 0
Proof: We can apply Claim 4.4.2 to rewrite x ⊙ v̂ = 0 as x̂ ∗ v = 0̂ = 0, and this implies the corollary. •
Let us write out this linear system explicitly: M_{x̂} is the circulant matrix whose (a, b) entry is x̂_{a−b} (indices mod n), so the a-th row of M_{x̂} v = 0 reads
x̂_a + λ1 x̂_{a−1} + . . . + λk x̂_{a−k} = 0.
In particular, the k rows with a = k + 1, . . . , 2k involve only x̂1, . . . , x̂2k. This submatrix only involves the values that we do know!
Consider
⎡ x̂k      x̂k−1   · · ·  x̂1 ⎤ ⎡ λ1 ⎤      ⎡ x̂k+1 ⎤
⎢ x̂k+1   x̂k      · · ·  x̂2 ⎥ ⎢ λ2 ⎥ = −  ⎢ x̂k+2 ⎥
⎢  ...                   ...  ⎥ ⎢ ... ⎥      ⎢  ...  ⎥
⎣ x̂2k−1  x̂2k−2  · · ·  x̂k ⎦ ⎣ λk ⎦      ⎣ x̂2k ⎦
It turns out that this linear system is full rank, so λ is the unique solut…
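Prony's method can be run end to end in numpy. A sketch, not from the text: the instance sizes are arbitrary, and locating supp(x) by evaluating p at all powers of ω is one convenient choice of root-finding. It builds the Toeplitz system from x̂1, . . . , x̂2k, solves for λ, and reads off the support from the roots of p.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 32, 3
support = [2, 11, 23]
x = np.zeros(n); x[support] = 1.0 + rng.random(k)   # nonzero coefficients
xhat = np.fft.fft(x)                                # we only use xhat_1..xhat_2k

# Toeplitz system: xhat_a + sum_j lambda_j xhat_{a-j} = 0 for a = k+1,...,2k
M = np.array([[xhat[a - j] for j in range(1, k + 1)]
              for a in range(k + 1, 2 * k + 1)])
lam = np.linalg.solve(M, -xhat[k + 1:2 * k + 1])

# p(z) = 1 + lam_1 z + ... + lam_k z^k vanishes exactly at omega^b, b in supp(x)
p = np.concatenate(([1.0], lam))
omega = np.exp(2j * np.pi / n)
vals = np.abs(np.polyval(p[::-1], omega ** np.arange(n)))
recovered = sorted(np.argsort(vals)[:k].tolist())
assert recovered == support
```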
yet have x be the uniquely sparsest solution to Ax = b. A random matrix has large Kruskal rank, and what we will need for compressed sensing is a robust analogue of Kruskal rank:
Definition 4.5.1 A matrix A is RIP with constant δk if for all k-sparse vectors x we have:
(1 − δk)‖x‖_2^2 ≤ ‖Ax‖_2^2 ≤ (1 + δk)‖x‖_2^2
lectures (and is indeed much stronger).
The natural (but intractable) approach is:
(P0) min ‖w‖_0 s.t. Aw = b
Since this is computationally hard to solve (for all A) we will work with the ℓ1 relaxation:
(P1) min ‖w‖_1 s.t. Aw = b
and we will prove conditions under which the solution w to this optimization probl…
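(P1) becomes a linear program after the standard split w = w⁺ − w⁻ with w⁺, w⁻ ≥ 0, so it can be handed to an off-the-shelf LP solver. A scipy sketch with an illustrative random instance, not from the text:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(8)
m, n = 40, 80
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[[5, 20, 70]] = [1.5, -2.0, 1.0]
b = A @ x_true

# min ||w||_1 s.t. Aw = b, as an LP over (w_plus, w_minus) >= 0
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
w = res.x[:n] - res.x[n:]
assert np.allclose(w, x_true, atol=1e-4)   # the l1 minimizer is the sparse x
```

With a random Gaussian A and such a sparse x, the ℓ1 minimizer coincides with the sparsest solution with high probability, which is exactly the phenomenon the conditions below formalize.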
Definition 4.5.6 Γ ⊆ Rn is an almost Euclidean subsection if for all v ∈ Γ,
(1/√n) ‖v‖_1 ≤ ‖v‖_2 ≤ (C/√n) ‖v‖_1
Note that the inequality (1/√n)‖v‖_1 ≤ ‖v‖_2 holds for all vectors, hence the second inequality is the important part. What we are requiring is that the ℓ1 and ℓ2 norms are almost equivalent after rescaling.
Question 8…
…supp(v)| ≥ S.
Proof:
‖v‖_1 = Σ_{j ∈ supp(v)} |vj| ≤ √|supp(v)| · ‖v‖_2 ≤ √|supp(v)| · (C/√n) ‖v‖_1
where the last inequality uses the property that Γ is almost Euclidean, and rearranging implies the claim. •
4.5. COMPRESSED SENSING
67
Now we can draw an analogy with error correcting code... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
to coordinates in Λ. Similarly let v_Λ̄ denote the restriction of v to Λ̄.
Claim 4.5.9 Suppose v ∈ Γ and Λ ⊆ [n] and |Λ| < S/16. Then
‖vΛ‖_1 < ‖v‖_1 / 4
Proof:
‖vΛ‖_1 ≤ √|Λ| ‖vΛ‖_2 ≤ √|Λ| (C/√n) ‖v‖_1 •
Hence not only do vectors in Γ have a linear number of non-zeros, but in fact their ℓ1 norm is s…
This implies the
lemma. •
Hence we can use almost Euclidean subsections to get exact sparse recovery up to
‖x‖_0 = S/16 = Ω(n/C²) = Ω(n / log(n/m))
Next we will consider stable recovery. Our main theorem is:
Theorem 4.5.11 Let Γ = ker(A) be an almost Euclidean subspace with parameter C. Let S = n/C² …
…‖_1. Hence
‖x − w‖_1 ≤ (1/4) ‖x − w‖_1 + 2 σ_{S/16}(x)_1 .
This completes the proof. •
Notice that in the above argument, it was the geometric properties of Γ which
played the main role. There are a number of proofs that basis pursuit works, but the
advantage of the one we presented here is that it clarifies the conne…
constructing an almost Euclidean subspace Γ with parameter C ∼ (log n)^{log log log n}.
We note that it is easy to achieve weaker guarantees, such as ∀ 0 ≠ v ∈ Γ, |supp(v)| = Ω(n), but these do not suffice for compressed sensing since we also require that the ℓ1 weight is spread out too.
Chapter 5
Dictionary Learning
ing a dictionary is only
useful if you can use it. The three most important cases where we can do sparse
recovery are:
(a) A has full column rank
(b) A is incoherent
(c) A is RIP
72
CHAPTER 5. DICTIONARY LEARNING
We will present an algorithm of Spielman, Wang and Wright [108] that succeeds
(under reasonab…
the basis it uses to
represent natural images.
Dictionary learning, or as it is often called sparse coding, is a basic building
block of many machine learning systems. This algorithmic primitive arises in applications ranging from denoising, edge-detection, super-resolution and compression. Dictionary learning is…
• Compute the first singular vector v1 of the residual matrix, and update the
column Ai to v1
In practice, these algorithms are highly sensitive to their starting conditions. In
the next sections, we will present alternative approaches to dictionary learning with
provable guarantees. Recall that some dictionaries ar…
A exactly from a polynomial number of samples
of the form bi.
Theorem 5.2.1 [108] There is a polynomial time algorithm to learn a full-rank
dictionary A exactly given a polynomial number of samples from the above model.
Strictly speaking, if the coordinates of xi are perfectly independent then we
could recover A a…
B is a (scaled) row of
X. In fact we can transform the above linear program into a simpler one that will
be easier to analyze:
(Q1) min ‖z^T X‖_1 s.t. c^T z = 1
There is a bijection between the solutions of this linear program and those of (P1) given by z = A^T w and c = A^{−1} r. Let us set r to be a column of B and …
Suppose that z* is an optimal solution to (Q1), where c = xi is a column of X. Set J = supp(c); we can write z* = z0 + z1 where z0 is supported in J and z1 is supported in J̄. Note that c^T z0 = c^T z*. We would like to show that z0 is at least as good of a solution to (Q1) as z* is. In particular we want to prove that…
apply a union bound over an ε-net of this
space to complete the argument). Then S is a random set and we can compute:
E[‖z1^T XS‖_1] = (|S|/p) E[‖z1^T X‖_1]
The expected size of S is p × E[|supp(xi)|] × θ = θ²np = o(p). Together, these imply that
E[‖z1^T X‖_1 − 2‖z1^T XS‖_1] = (1 − 2|S|/p) E[‖z1^T X‖_1] …
non-zero entry. It is easy to see that:
Claim 5.2.4 If each column of X^J has at most one non-zero entry, then
E[‖zJ^T X^J‖_1] = C (p/|J|) ‖zJ‖_1
where C is the expected absolute value of a non-zero in X.
So we can establish Step 2 by bounding the contribution of columns of X^J with two or more non-zeros. Hence thi…
columns
(which is the best we could hope for).
5.3 Overcomplete Dictionaries
Here we will present a recent algorithm of Arora, Ge and Moitra [15] that works for
incoherent and overcomplete dictionaries. The crucial idea behind the algorithm is
a connection to an overlapping clustering problem. We will consider a m... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
samples b1, b2, …, bp, the intersection graph G is a graph on V = {b1, b2, …, bp} where (bi, bj) is an edge if and only if |⟨bi, bj⟩| > τ.

How should we choose τ? We want the following properties to hold. Throughout this section we will let Si denote the support of xi.

(a) If (i, j) is an edge then Si ∩ Sj ≠ ∅ …
and hence if k²µ < 1/2 and we choose τ = 1/2 we certainly satisfy condition (a) above. Moreover it is not hard to see that if Si and Sj intersect then the probability that ⟨xi, xj⟩ is non-zero is at least 1/2 as well, and this yields condition (b). However since µ is roughly 1/√n for…
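As a concrete illustration of the definition, the intersection graph can be built directly from the Gram matrix of the samples (the toy data below is an illustrative assumption):

```python
import numpy as np

def intersection_graph(B, tau):
    """Build the intersection graph on samples b_1..b_p (the columns of B):
    (i, j) is an edge iff |<b_i, b_j>| > tau."""
    G = np.abs(B.T @ B) > tau
    np.fill_diagonal(G, False)   # no self-loops
    return G

# toy example: the first two samples overlap, the third is disjoint
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
G = intersection_graph(B, tau=0.5)
print(G[0, 1], G[0, 2])  # True False
```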
exp(−c · min(t²/‖M‖_F², t/‖M‖₂)), and let y be the vector which is the concatenation of xi restricted to Si and xj restricted to Sj. Then y^T M y = ⟨bi, bj⟩ and we can appeal to the above lemma to bound the deviation of ⟨bi, bj⟩ from its expectation. In particular, trace(M) = 0 and ‖M‖₂ …
b1, b2 and b3 that are a triangle in G. We know that S1 ∩ S2, S1 ∩ S3 and S2 ∩ S3 are each non-empty, but how do we decide if S1 ∩ S2 ∩ S3 is non-empty too?
Alternatively, given three nodes where each pair belong to the same community,
how do we know whether or not all three belong to one community? The intuition
i... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
(b4, bi) is an edge for i = 1, 2, 3 is at most

O(k^6/m^3 + k^3(a + b + c)/m^2)
Proof: We know that if (b4, bi) is an edge for i = 1, 2, 3 then S4 ∩ Si must be
non-empty for i = 1, 2, 3. We can break up this event into subcases:
(a) Either S4 intersects S1, S2 and S3 disjointly (i.e. it does not con... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
triple b1, b2, b3 we will be able
to determine whether or not S1 ∩ S2 ∩ S3 is empty with high probability by counting
how many other nodes b4 have an edge to all of them.
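The counting test just described can be sketched as follows; the threshold is a tuning parameter that would be set from the bounds in the text, and the toy graph is an illustrative assumption:

```python
import numpy as np

def common_neighbors(G, i, j, k):
    """Count nodes adjacent to all of i, j, k in the boolean adjacency matrix G."""
    return int(np.sum(G[i] & G[j] & G[k]))

def triple_intersects(G, i, j, k, threshold):
    """Guess whether S_i, S_j, S_k share a common element: many common
    neighbors indicate a shared dictionary element."""
    return common_neighbors(G, i, j, k) >= threshold

# toy graph: nodes 3 and 4 are adjacent to all of the triangle {0, 1, 2}
G = np.zeros((6, 6), dtype=bool)
for a, b in [(0, 1), (0, 2), (1, 2), (3, 0), (3, 1), (3, 2), (4, 0), (4, 1), (4, 2)]:
    G[a, b] = G[b, a] = True
print(common_neighbors(G, 0, 1, 2), triple_intersects(G, 0, 1, 2, threshold=2))  # 2 True
```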
Now we are ready to give an algorithm for finding the communities.
CommunityFinding [15]
Input: inte... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
last step. This algorithm outputs the correct communities with high probability if k ≤ c·min(√n/(µ log n), m^{2/5}). In [15] the authors give a higher-order algorithm for community finding that works when k ≤ m^{1/2−η} for any η > 0; however, the running time is a polynomial whose exponent depends on η.

All t…
few samples. If these parameters are accurate, we can then cluster the samples, and our error rate will be nearly as good as that of the Bayes optimal classifier.
6.1 History
The problem of learning the parameters of a mixture of Gaussians dates back to the
famous statistician Karl Pearson (1894) who was interested in biolog... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
density function is given by:

N(µ, σ²):  f(x) = (1/√(2πσ²)) · exp(−(x − µ)²/(2σ²))

The density of a multidimensional Gaussian in Rⁿ is given by:

N(µ, Σ):  f(x) = (1/((2π)^{n/2} det(Σ)^{1/2})) · exp(−(x − µ)^T Σ^{-1} (x − µ)/2)

Here Σ is the covariance matrix. If Σ = I_n and µ = 0 then the distribution is just N(0, 1) × N(0, 1) × … × N(0, 1).
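A quick numerical sanity check of the two formulas above (the helper function is hypothetical, not from the text): with Σ = I and µ = 0 the multivariate density factors into a product of univariate standard normals.

```python
import numpy as np

def gaussian_density(x, mu, Sigma):
    """Density of N(mu, Sigma) at x, following the formula above."""
    n = len(mu)
    d = x - mu
    norm = (2 * np.pi) ** (n / 2) * np.linalg.det(Sigma) ** 0.5
    return float(np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / norm)

# with Sigma = I and mu = 0 the joint density equals the product of
# univariate N(0, 1) densities
x = np.array([0.3, -1.2, 0.7])
joint = gaussian_density(x, np.zeros(3), np.eye(3))
product = np.prod(np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))
print(abs(joint - product) < 1e-12)  # True
```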
a degree r + 1
polynomial (Pr) in the unknown parameters.
Image courtesy of Peter D. M. Macdonald. Used with permission.

Figure 6.1: A fit of a mixture of two univariate Gaussians to Pearson's data on Naples crabs, created by Peter Macdonald using R.
Pearson’s Sixth Moment Test: We can … closest to the empirical sixth moment M6. This is called the sixth moment test.
CHAPTER 6. GAUSSIAN MIXTURE MODELS
Expectation-Maximization

Much of modern statistics instead focuses on the maximum likelihood estimator, which would choose to set the parameters so as to…

…ably compute the true parameters of a mixture of Gaussians, given a polynomial number of random samples. This question was introduced in the seminal paper of Dasgupta [45], and the first generation of algorithms focused on the case where the components of the mixture have essentially no “overlap”. The next generat…
in the separation depends on wmin, and we assume we know this parameter (or a lower bound on it). The separation required is Ω(√n · σmax), where σmax …
The basic idea behind the algorithm is to project the mixture onto log k dimensions uniformly at random. This projection will preserve distances between each pair of centers µi and µj with h…
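A small experiment illustrating this projection step; the dimensions and constants below are illustrative choices, not the ones from the analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 50                      # ambient and target dimension (d plays the role of log k)
mu = rng.normal(size=(4, n)) * 10    # four "centers"
P = rng.normal(size=(n, d)) / np.sqrt(d)   # random projection, scaled to preserve norms on average

# pairwise distances between centers before and after projecting
ratios = []
for i in range(4):
    for j in range(i + 1, 4):
        before = np.linalg.norm(mu[i] - mu[j])
        after = np.linalg.norm((mu[i] - mu[j]) @ P)
        ratios.append(after / before)
print(min(ratios), max(ratios))   # all ratios close to 1
```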
Claim 6.2.3 All of the vectors x − µ1, x′ − µ1, µ1 − µ2, y − µ2 are nearly orthogonal (whp).

This claim is immediate since the vectors x − µ1, x′ − µ1, y − µ2 are uniform from a sphere, and µ1 − µ2 is the only fixed vector. In fact, any set of vectors in which all but o…
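This near-orthogonality is easy to observe numerically (the dimension below is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u = rng.normal(size=n); u /= np.linalg.norm(u)   # uniform on the sphere
v = rng.normal(size=n); v /= np.linalg.norm(v)
print(abs(u @ v))   # typically on the order of 1/sqrt(n) = 0.01
```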
separation condition that depends on k – the number of components. The idea is
that if we could project the mixture into the subspace T spanned by {µ1, . . . , µk},
we would preserve the separation between each pair of components but reduce the
ambient dimension.
So how can we find T, the subspace spanned by the me…

a hyperplane that separates the mixture so that almost all of one component is on one side, and almost all of the other component is on the other side. [32] gave an algorithm that succeeds provided there is such a separating hyperplane; however, the conditions are more complex to state for mixtures of more than two…
that x ≠ y.

Claim 6.3.2 There is a coupling with error ε between F and G if and only if dTV(F, G) ≤ ε.
Returning to the problem of clustering the samples from a mixture of two Gaussians,
we have that if dT V (F1, F2) = 1/2 then there is a coupling between F1 and F2
that agrees with probability 1/2. Hen... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
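For discrete distributions, Claim 6.3.2 is witnessed by the maximal coupling, which can be sampled explicitly; a sketch (the two example distributions are illustrative, and the code assumes dTV > 0):

```python
import numpy as np

def tv_distance(p, q):
    return 0.5 * np.sum(np.abs(p - q))

def maximal_coupling(p, q, rng, size):
    """Sample pairs (x, y) with x ~ p, y ~ q that agree with
    probability exactly 1 - d_TV(p, q)."""
    overlap = np.minimum(p, q)
    s = overlap.sum()                      # = 1 - d_TV(p, q)
    xs, ys = [], []
    for _ in range(size):
        if rng.random() < s:               # agree: draw from the overlap
            x = rng.choice(len(p), p=overlap / s)
            xs.append(x); ys.append(x)
        else:                              # disagree: draw from the residuals
            xs.append(rng.choice(len(p), p=(p - overlap) / (1 - s)))
            ys.append(rng.choice(len(q), p=(q - overlap) / (1 - s)))
    return np.array(xs), np.array(ys)

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.25, 0.25, 0.5])
rng = np.random.default_rng(0)
xs, ys = maximal_coupling(p, q, rng, 4000)
print(tv_distance(p, q), np.mean(xs != ys))  # 0.5 and approximately 0.5
```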
.g. mixtures of two Gaussians). Our goal in improper density estimation is to find any distribution F̂ so that dTV(F, F̂) ≤ ε. This is the weakest goal for a learning algorithm. A popular approach (especially in low dimension) is to construct a kernel density estimate; suppose we take many samples from F and const…
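A minimal sketch of a kernel density estimate with a Gaussian kernel (the bandwidth and sample size are illustrative assumptions):

```python
import numpy as np

def kernel_density_estimate(samples, h):
    """Return a Gaussian kernel density estimate with bandwidth h:
    the average of a Gaussian bump placed at each sample."""
    def f(x):
        return np.mean(np.exp(-(x - samples) ** 2 / (2 * h ** 2))) / (h * np.sqrt(2 * np.pi))
    return f

rng = np.random.default_rng(0)
samples = rng.normal(size=20_000)          # here F = N(0, 1)
f_hat = kernel_density_estimate(samples, h=0.2)
true_density = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print(abs(f_hat(0.0) - true_density(0.0)))  # small
```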
6.4. CLUSTERING-FREE ALGORITHMS

to be a good estimate for F on a component-by-component basis. For example, our goal specialized to the case of mixtures of two Gaussians is:

Definition 6.3.3 We will say that a mixture F̂ = ŵ1F̂1 + ŵ2F̂2 is ε-close (on a component-by-component basis) to F if there is a permutation…
bounds for proper density estimation. We will give optimal algorithms for parameter learning for mixtures of k Gaussians, which run in polynomial time for any k = O(1). Moreover there are pairs of mixtures of k Gaussians F and F̂ that are not close on a component-by-component basis, but have dTV(F, F̂) ≤ 2^{−k} […

|wi − ŵπ(i)| ≤ ε and dTV(Fi, F̂π(i)) ≤ ε
When can we hope to learn an ε-close estimate in poly(n, 1/ε) samples? In fact, there are two trivial cases where we cannot do this, but thes…
i and dTV(Fi, Fj) ≥ ε for each i ≠ j, then there is an efficient algorithm that learns an ε-close estimate F̂ to F whose running time and sample complexity are poly(n, 1/ε, log 1/δ) and succeeds with probability 1 − δ.
Note that the degree of the polynomial depends polynomially on k. Kalai, Moitra
and... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
|wi − ŵπ(i)|, ‖µi − µ̂π(i)‖, ‖Σi − Σ̂π(i)‖_F ≤ ε for all i. We will further assume that F is normalized appropriately:
Definition 6.4.3 A distribution F is in isotropic position if
(a) Ex←F [x] = 0
(b) Ex←F [xxT ] = I
Alternatively, we require that the mean of the distribution... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
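Putting empirical samples into (approximately) isotropic position is a standard whitening transformation; a sketch (the toy mixing matrix is an illustrative assumption):

```python
import numpy as np

def make_isotropic(samples):
    """Affinely transform samples so that the empirical distribution is in
    isotropic position: zero mean and identity covariance."""
    mu = samples.mean(axis=0)
    Sigma = np.cov(samples, rowvar=False, bias=True)
    # symmetric inverse square root of Sigma
    vals, vecs = np.linalg.eigh(Sigma)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return (samples - mu) @ W

rng = np.random.default_rng(0)
mixing = np.array([[2.0, 0, 0], [1, 1, 0], [0, 0, 3.0]])   # correlates the coordinates
raw = rng.normal(size=(5000, 3)) @ mixing
iso = make_isotropic(raw)
print(np.allclose(iso.mean(axis=0), 0, atol=1e-10),
      np.allclose(np.cov(iso, rowvar=False, bias=True), np.eye(3), atol=1e-6))
```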
describe the basic outline of the algorithm, although there will be many details to fill in:
(a) Consider a series of projections down to one dimension
(b) Run a univariate learning algorithm
(c) Set up a system of linear equations on the high-dimensional parameters, and
back solve
Isotropic Projection Lemma
We will... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
run it on projr[F ].
Problem 4 But what if dp(projr[F1], projr[F2]) is exponentially small?
This would be a problem since we would need to run our univariate algorithm with
exponentially fine precision just to see that there are two components and not one!
How can we get around this issue? In fact, this almost surel... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
Note that this lemma is not true when F is not in isotropic position (e.g. consider
the parallel pancakes example), and moreover when generalizing to mixtures of k > 2
Gaussians this is the key step that fails since even if F is in isotropic position, it
could be that for almost all choices of r the project... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
directions. The key observation is that as we vary r to s the parameters of the mixture vary continuously. See Figure ??. Hence when we project onto r, we know from the isotropic projection lemma that the two components will either have noticeably different means or variances. Suppose their means are different by ε3…
onto these directions and pair up the components correctly. We can only hope to learn the parameters of these projections up to some additive accuracy ε1 (and our univariate learning algorithm will have running time and sample complexity poly(1/ε1)).
need to design a univariate algorithm, and next we return to
Pearson’s original problem!
6.5 A Univariate Algorithm
Here we will give a univariate algorithm for learning the parameters of a mixture of two Gaussians up to additive accuracy ε whose running time and sample complexity are poly(1/ε). Note that the mixtur…
six moments of F(Θ) from enough random examples, and output Θ̂ if its first six moments are each within an additive τ of the observed moments. (This is a slight variant on Pearson's sixth moment test.)
It is easy to see that if we take enough samples and set τ appropriately, then
if we round the true parameters Θ ... | https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf |
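A toy version of this moment test, with unit variances assumed known and a coarse parameter grid; both are simplifying assumptions made here for illustration, not made in the text:

```python
import numpy as np
from itertools import product

def gaussian_raw_moments(mu, var):
    """First six raw moments of N(mu, var)."""
    return np.array([
        mu,
        mu**2 + var,
        mu**3 + 3 * mu * var,
        mu**4 + 6 * mu**2 * var + 3 * var**2,
        mu**5 + 10 * mu**3 * var + 15 * mu * var**2,
        mu**6 + 15 * mu**4 * var + 45 * mu**2 * var**2 + 15 * var**3,
    ])

def mixture_moments(w, m1, m2):
    # unit variances assumed known, to keep the candidate grid small
    return w * gaussian_raw_moments(m1, 1.0) + (1 - w) * gaussian_raw_moments(m2, 1.0)

# samples from the mixture 0.5*N(-2, 1) + 0.5*N(2, 1)
rng = np.random.default_rng(0)
n = 200_000
labels = rng.random(n) < 0.5
x = np.where(labels, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))
empirical = np.array([np.mean(x**i) for i in range(1, 7)])

# output the grid point whose first six moments are closest to the empirical ones
candidates = product([0.3, 0.5, 0.7],
                     [-2.0, -1.0, 0.0, 1.0, 2.0],
                     [-2.0, -1.0, 0.0, 1.0, 2.0])
best = min(candidates, key=lambda t: np.max(np.abs(mixture_moments(*t) - empirical)))
print(best)
```

Up to swapping the two components, the moment test picks out the true parameters on this grid.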