right hand side to recover the topic model, provided that A has full column rank. In fact, we can compute α0 from our samples (see [8]), but we will focus instead on proving the above identity.

CHAPTER 3. TENSOR METHODS

Moments of the Dirichlet

The main identity that we would like to establish is jus...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
third ball is color k. The probability that the first ball is color i is αi/α0, and since we place it back along with one more ball of its own color, the probability that the second ball is color i as well is (αi + 1)/(α0 + 1). And the probability that the third ball is color k is αk/(α0 + 2). It is easy to check the above formulas in the other cases too. ...
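The urn probabilities above can be checked numerically. Below is a minimal sketch (the parameter values are hypothetical) that computes the exact probability of a color sequence under the Polya urn and compares it against the closed form (αi/α0) · ((αi + 1)/(α0 + 1)) · (αk/(α0 + 2)):

```python
def draw_seq_prob(alpha, seq):
    """Exact probability of observing the color sequence `seq` when drawing
    from a Polya urn with initial counts `alpha` (each drawn ball is replaced
    together with one extra ball of its own color)."""
    counts = list(alpha)
    total = sum(counts)
    p = 1.0
    for color in seq:
        p *= counts[color] / total
        counts[color] += 1
        total += 1
    return p

alpha = [2.0, 1.0, 3.0]   # hypothetical Dirichlet parameters
a0 = sum(alpha)
i, k = 0, 2

# closed form from the text for the sequence (i, i, k)
closed = (alpha[i] / a0) * ((alpha[i] + 1) / (a0 + 1)) * (alpha[k] / (a0 + 2))
assert abs(draw_seq_prob(alpha, (i, i, k)) - closed) < 1e-12
```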
, the (i, i, i) numerator of M ⊗ µ is αi²(αi + 1). The case that requires some care is:

Claim 3.5.9 If i ≠ k, then Ri,i,k = 0

The reason this case is tricky is that the terms M ⊗ µ (all three ways) do not all count the same. If we think of µ along the third dimension of the tensor then the ith topic occurs ...
polynomial time algorithm to learn a topic matrix Â that is ε-close to the true A in a Latent Dirichlet Allocation model, provided we are given at least poly(n, 1/ε, 1/σr, 1/αmin) documents of length at least three, where n is the size of the vocabulary, σr is the smallest singular value of A, and αmin is the sma...
phones and N conversations going on in a room. Each microphone hears a superposition of the conversations given by the corresponding rows of A. If we think of the conversations as independent and memoryless, can we disentangle them? Such problems are also often referred to as blind source separation. We will follo...
approach was to consider higher order tensors. This time we will proceed in a different manner. Since M ≻ 0 we can find B such that M = BBᵀ. How are B and A related? In fact, we can write BBᵀ = AAᵀ ⇒ B⁻¹AAᵀ(B⁻¹)ᵀ = I, and this implies that B⁻¹A is orthogonal, since a square matrix times its own transpose is the id...
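This whitening step can be sketched numerically. Here B is taken to be the Cholesky factor of M = AAᵀ (any factorization with M = BBᵀ works), and A is a hypothetical random mixing matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # hypothetical square mixing matrix
M = A @ A.T                       # M = AA^T, as in the text

# find B with M = B B^T (the Cholesky factor is one choice)
B = np.linalg.cholesky(M)

R = np.linalg.inv(B) @ A          # R = B^{-1} A
# R R^T = B^{-1} A A^T B^{-T} = B^{-1} M B^{-T} = I, so R is orthogonal
assert np.allclose(R @ R.T, np.eye(n))
```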
‖v‖2 = 1. What are its local minima?

E[(vᵀx)⁴] = Σi vi⁴ E[xi⁴] + 6 Σi<j vi² vj² = Σi vi⁴ E[xi⁴] + 3(Σi vi²)² − 3 Σi vi⁴ = Σi vi⁴ (E[xi⁴] − 3) + 3

Hence the local ...
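The identity E[(vᵀx)⁴] = Σi vi⁴ (E[xi⁴] − 3) + 3 for unit v can be sanity-checked by exact enumeration. The sketch below uses x uniform on {−1, +1}ⁿ (so E[xi] = 0, E[xi²] = 1, E[xi⁴] = 1), a hypothetical input chosen only because the expectation can then be computed exactly:

```python
import numpy as np
from itertools import product

n = 4
rng = np.random.default_rng(1)
v = rng.standard_normal(n)
v /= np.linalg.norm(v)            # enforce ||v||_2 = 1

# exact expectation over x uniform on {-1,+1}^n
lhs = np.mean([np.dot(v, x) ** 4 for x in product([-1.0, 1.0], repeat=n)])

# the identity from the text, with E[x_i^4] = 1 for +/-1 variables
rhs = np.sum(v ** 4) * (1.0 - 3.0) + 3.0
assert abs(lhs - rhs) < 1e-10
```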
themselves the sum of the cumulants of Xi and Xj. This is precisely the property we exploited here.

Chapter 4

Sparse Recovery

In this chapter we will study algorithms for sparse recovery: given a matrix A and a vector b that is a sparse linear combination of its columns – i.e. Ax = b and x is sparse – when ...
is to prove that finding the sparsest solution to a linear system is hard. We will begin with the related problem:

Problem 1 (P) Find the sparsest non-zero vector x in a given subspace S

Khachiyan [81] proved that this problem is NP-hard, and this result has many interesting applications that we will discuss later...
determinant of the matrix whose columns are {Γw(αi)}i∈I. Then the proof is based on the following observations:

(a) The determinant is a polynomial in the variables αi with total degree (n choose 2) + 1, which can be seen by writing the determinant in terms of its Laplace expansion (see e.g. [74]).

(b) M...
Definition 4.1.3 A set of m vectors in Rn is in general position if every set of at most n vectors is linearly independent. From the above reduction we get that it is hard to decide whether a set of m vectors in Rn is in general position or not (since there is an I with |I| = n whose submatrix is singular if and onl...
to Ax = 0. •

4.2 Uniqueness and Uncertainty Principles

Incoherence

Here we will define the notion of an incoherent matrix A, and prove that if x is sparse enough then it is the uniquely sparsest solution to Ax = b.

Definition 4.2.1 The columns of A ∈ Rn×m are µ-incoherent ...
where I ∈ Rn×n is the identity matrix and D ∈ Rn×n is the DFT matrix. In particular, Dij = ω^((i−1)(j−1))/√n, where ω = e^(i2π/n). This is often referred to as the spikes-and-sines matrix. It is not hard to see that µ = 1/√n here.

Uncertainty Principles

The important point is that if A is incoherent, then if x i...
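A quick numeric check of the claim that the spikes-and-sines matrix [I, D] has coherence µ = 1/√n (the choice n = 64 is arbitrary):

```python
import numpy as np

n = 64
idx = np.arange(n)
# DFT matrix with D_{ab} = omega^{(a-1)(b-1)} / sqrt(n), omega = e^{i 2 pi / n}
D = np.exp(2j * np.pi * np.outer(idx, idx) / n) / np.sqrt(n)
A = np.hstack([np.eye(n), D])     # the spikes-and-sines matrix [I, D]

# coherence: largest |<A_i, A_j>| over distinct (unit-norm) columns
G = np.abs(A.conj().T @ A)
np.fill_diagonal(G, 0.0)
mu = G.max()
assert abs(mu - 1 / np.sqrt(n)) < 1e-9
```

The off-diagonal inner products within each basis vanish, so the maximum comes from the cross terms |⟨e_a, D_b⟩| = 1/√n.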
entry of Vᵀ U has absolute value at most µ(A), and so |βᵀ(VᵀU)α| ≤ µ(A)‖α‖1‖β‖1. Using Cauchy–Schwarz it follows that ‖α‖1 ≤ √‖α‖0 ‖α‖2, and thus

‖b‖2² ≤ µ(A) √(‖α‖0‖β‖0) ‖α‖2‖β‖2

Rearranging, we have 1/µ(A) ≤ √(‖α‖0‖β‖0). Finally, applying the AM–GM inequality we get 2/µ(A) ≤ ‖α‖0 + ‖β‖0, and this completes the proof. •
It is easy to see that ‖y‖0 = ‖αy‖0 + ‖βy‖0 ≥ 2/µ, and so ‖x̂‖0 ≥ ‖y‖0 − ‖x‖0 > 1/µ; hence x̂ has strictly more non-zeros than x does, and this completes the proof. •

Indeed, a similar statement is true even if A is an arbitrary incoherent matrix (instead of a union of two orthonormal bases). We will discuss this extension furthe...
∈ ker(A). However ‖y‖0 ≥ r + 1 because every set of r columns of A is linearly independent, by assumption. Then ‖x̂‖0 ≥ ‖y‖0 − ‖x‖0 ≥ r/2 + 1, and so x̂ has strictly more non-zeros than x does, and this completes the proof. •

In fact, if A is incoherent we can lower bound its Kruskal rank (and so the proof in the pr...
two.

Corollary 4.2.7 Suppose A is µ-incoherent. If Ax = b and ‖x‖0 < 1/(2µ), then x is the uniquely sparsest solution.

There are a number of algorithms that recover x up to the uniqueness threshold in the above corollary, and we will cover one such algorithm next.

4.3 Pursuit Algorithms ...
algorithm selects is in T. (b) Each index j gets chosen at most once. These two properties immediately imply that orthogonal matching pursuit recovers the true solution x, because the residual error r will be non-zero until S = T, and moreover the linear system A_T x_T = b has a unique solution (since otherwise x ...
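The loop described above can be sketched directly: select the column most correlated with the residual, then re-solve the least-squares problem on the selected set. The random Gaussian A and the sparsity level below are hypothetical test inputs (for which recovery succeeds with high probability), not part of the analysis above:

```python
import numpy as np

def omp(A, b, k, tol=1e-10):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit b on all chosen columns by least squares."""
    n, m = A.shape
    S, r = [], b.copy()
    for _ in range(k):
        if np.linalg.norm(r) <= tol:
            break
        j = int(np.argmax(np.abs(A.T @ r)))
        if j in S:                    # each index should be chosen at most once
            break
        S.append(j)
        xS, *_ = np.linalg.lstsq(A[:, S], b, rcond=None)
        r = b - A[:, S] @ xS          # residual is orthogonal to chosen columns
    x = np.zeros(m)
    x[S] = xS
    return x

rng = np.random.default_rng(2)
n, m, k = 100, 200, 3
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)        # unit-norm (incoherent, w.h.p.) columns
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = 1.0 + rng.random(k)
b = A @ x_true
assert np.allclose(omp(A, b, k), x_true, atol=1e-8)
```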
columns of A, |y_{k′+1}| = 0, |y_{k′+2}| = 0, . . . , |y_m| = 0, where k′ ≤ k. Hence supp(y) = {1, 2, . . . , k′} ⊆ T. Then to ensure that we pick j ∈ T, a sufficient condition is that

(4.1) |⟨A_ℓ, r⟩| > |⟨A_i, r⟩| for some ℓ ≤ k′ and all i ≥ k′ + 1.

We can lower-bound the left-hand side of (4.1...
|⟨Σ_{ℓ=1}^{k′} y_ℓ A_ℓ, A_i⟩| ≤ |y_1| Σ_{ℓ=1}^{k′} |⟨A_ℓ, A_i⟩| ...
uniqueness. However in the case where A = [U, V] and U and V are orthogonal, we proved a uniqueness result that is better by a factor of two. There is no known algorithm that matches the best known uniqueness bound there, although there are better algorithms than the one above (see e.g. [55]).

Matching Pursuit

We...
F_{a,b} = (1/√n) exp(i2π(a − 1)(b − 1)/n)

We can write ω = e^(i2π/n), and then the first row is (1/√n)[1, 1, . . . , 1]; the second row is (1/√n)[1, ω, ω², . . .], etc. We will make use of the following basic properties of F:

(a) F is orthonormal: F^H F = F F^H, where ...
in the Fourier representation:

Corollary 4.4.3 Let z = c ∗ x; then ẑ = ĉ ⊙ x̂, where ⊙ indicates coordinate-wise multiplication.

Proof: We can write z = M_c x = F^H diag(ĉ) F x = F^H diag(ĉ) x̂ = F^H (ĉ ⊙ x̂), and this completes the proof. •

We introduce the following helper polynomial, in order to describe Pro...
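The corollary is the circular convolution theorem, which we can verify numerically. One caveat: numpy's FFT is unnormalized, for which the identity holds with no extra factor; with the unitary F used in the text a √n scaling appears:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
c, x = rng.standard_normal(n), rng.standard_normal(n)

# circular convolution z = c * x, with z_a = sum_b c_b x_{(a-b) mod n}
z = np.array([sum(c[b] * x[(a - b) % n] for b in range(n)) for a in range(n)])

# convolution theorem: DFT(z) = DFT(c) ⊙ DFT(x), coordinate-wise
assert np.allclose(np.fft.fft(z), np.fft.fft(c) * np.fft.fft(x))
```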
, the zeros of v̂ correspond to roots of p, and hence to non-zeros of x. Conversely, the non-zeros of v̂ correspond to zeros of x. We conclude that x ⊙ v̂ = 0, and so:

Corollary 4.4.7 M_v x = 0

Proof: We can apply Claim 4.4.2 to rewrite x ⊙ v̂ = 0 as x̂ ∗ v = 0̂ = 0, and this implies the corollary. •

Let us write out t...
first 2k values of the DFT of x. However consider the k × (k + 1) submatrix whose top left value is x̂_{k+1} and whose bottom right value is x̂_k. This matrix only involves the values that we do know! Consider

⎡ x̂_{k+1}   x̂_k       . . .  x̂_1 ⎤
⎢  . . .     . . .     . . .  . . . ⎥
⎣ x̂_{2k}    x̂_{2k−1}  . . .  x̂_k ⎦
...
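This recovery idea (a form of Prony's method, under the assumption that a null vector of the known Hankel-structured matrix gives the coefficients of the helper polynomial) can be implemented directly; the signal below is a hypothetical test input:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 32, 3
support = sorted(int(s) for s in rng.choice(n, size=k, replace=False))
x = np.zeros(n)
x[support] = 1.0 + rng.random(k)      # values bounded away from zero
xhat = np.fft.fft(x)                  # only xhat[0], ..., xhat[2k-1] get used

# the DFT of a k-sparse x is a sum of k exponentials, so it satisfies a
# length-k linear recurrence; the k x (k+1) Hankel matrix of known values
# therefore has a (generically one-dimensional) null space: the coefficients
# of the helper polynomial
H = np.array([[xhat[a + t] for t in range(k + 1)] for a in range(k)])
_, _, Vh = np.linalg.svd(H)
p = Vh[-1].conj()                     # polynomial coefficients, lowest degree first

# the roots of p are e^{-2 pi i j / n} for j in supp(x)
roots = np.roots(p[::-1])
recovered = sorted(int(round(-np.angle(r) * n / (2 * np.pi))) % n for r in roots)
assert recovered == support
```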
system to find the non-zero values of x.

4.5 Compressed Sensing

Here we will give stable algorithms for recovering a signal that has an almost linear (in the number of rows of the sensing matrix) number of non-zeros. Recall that the Kruskal rank of the columns of A is what determines how many non-zeros we can allo...
Note that we will not require that w is k-sparse. However if x is exactly k-sparse, then any w satisfying the above condition must be exactly equal to x and hence this new recovery goal subsumes our exact recovery goals from previous lectures (and is indeed much stronger). The natural (but intractable) approach is...
will make use of certain geometric properties of Γ (that hold almost surely) in order to prove that basis pursuit works:

Definition 4.5.6 Γ ⊆ Rn is an almost Euclidean subsection if for all v ∈ Γ,

(1/√n) ‖v‖1 ≤ ‖v‖2 ≤ (C/√n) ‖v‖1

Note that the inequality (1/√n)‖v‖1 ≤ ‖v‖2 holds for all...
S = n/C².

Claim 4.5.7 Let v ∈ Γ; then either v = 0 or |supp(v)| ≥ S.

Proof:

‖v‖1 = Σ_{j∈supp(v)} |v_j| ≤ √|supp(v)| · ‖v‖2 ≤ (C √|supp(v)| / √n) ‖v‖1

where the last inequality uses the property that Γ is almost Euclidean. The last inequality implies the claim. •
v_Λ denote the restriction of v to coordinates in Λ. Similarly let v_Λ̄ denote the restriction of v to Λ̄.

Claim 4.5.9 Suppose v ∈ Γ and Λ ⊆ [n] and |Λ| < S/16. Then ‖v_Λ‖1 < ‖v‖1/4.

Proof:

‖v_Λ‖1 ≤ √|Λ| ‖v_Λ‖2 ≤ (C √|Λ| / √n) ‖v‖1 < (C/√n)(√n/(4C)) ‖v‖1 = ‖v‖1/4

since |Λ| < S/16 = n/(16C²). •

Hence not only do vectors in Γ have a linear number of non-zeros, b...
This implies the lemma. •

Hence we can use almost Euclidean subsections to get exact sparse recovery up to ‖x‖0 = S/16 = Ω(n/C²) = Ω(n / log(n/m)). Next we will consider stable recovery. Our main theorem is:

Theorem 4.5.11 Let Γ = ker(A) be an almost Euclidean subspace with parameter C. Let S = ...
≤ (1/2)‖x − w‖1 + 2σ_{S/16}(x)_1.

This completes the proof. •

Notice that in the above argument, it was the geometric properties of Γ which played the main role. There are a number of proofs that basis pursuit works, but the advantage of the one we presented here is that it clarifies the connection between the clas...
weaker guarantees, such as: for all v ∈ Γ with v ≠ 0, |supp(v)| = Ω(n); but these do not suffice for compressed sensing since we also require that the ℓ1 weight is spread out too.

Chapter 5

Dictionary Learning

In this chapter we will study dictionary learning, where we are given many examples b1, b2, ..., bp and we wou...
has full column rank, (b) A is incoherent, (c) A is RIP.

We will present an algorithm of Spielman, Wang and Wright [108] that succeeds (under reasonable distributional assumptions) if A is full rank, and if each bi is a linear combination of at most O(√n) columns of A. Next, we will give an algorithm ...
and compression. Dictionary learning is also used in the design of some deep learning architectures. Popular approaches to solving this problem in practice are variants of the standard alternating minimization approach. Suppose the pairs (xi, bi) are collected into the columns of matrices X and B respectively. Then...
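The alternating minimization heuristic can be sketched as below, with a greedy matching-pursuit-style sparse coding step and a least-squares dictionary update (all problem sizes are hypothetical). Only the dictionary-update step is guaranteed not to increase the residual, which is what the final check asserts:

```python
import numpy as np

def sparse_code(A, B, k):
    """Sparse-coding step: for each column of B greedily pick k columns of A
    by correlation with the residual, then re-fit by least squares."""
    m, p = A.shape[1], B.shape[1]
    X = np.zeros((m, p))
    for t in range(p):
        b = B[:, t]
        S, r = [], b.copy()
        for _ in range(k):
            corr = np.abs(A.T @ r)
            corr[S] = -1.0                      # never re-pick an index
            S.append(int(np.argmax(corr)))
            coef, *_ = np.linalg.lstsq(A[:, S], b, rcond=None)
            r = b - A[:, S] @ coef
        X[S, t] = coef
    return X

def dict_update(B, X):
    """Dictionary-update step: A = argmin_A ||B - A X||_F, a least-squares solve."""
    At, *_ = np.linalg.lstsq(X.T, B.T, rcond=None)
    return At.T

rng = np.random.default_rng(5)
n, m, p, k = 20, 30, 200, 3
A_true = rng.standard_normal((n, m))
A_true /= np.linalg.norm(A_true, axis=0)
X_true = np.zeros((m, p))
for t in range(p):
    X_true[rng.choice(m, k, replace=False), t] = rng.standard_normal(k)
B = A_true @ X_true

A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)                  # random initial dictionary
X = sparse_code(A, B, k)
err0 = np.linalg.norm(B - A @ X)
A = dict_update(B, X)                           # optimal given X: error cannot go up
assert np.linalg.norm(B - A @ X) <= err0 + 1e-8
```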
. Then an important rationale for preferring provable algorithms here is that if the examples are generated from an easy dictionary, our algorithms will really learn an easy dictionary (but for the above heuristics this need not be the case).

5.2 Full Rank Dictionaries

Here we present a recent algorithm of Spielm...
not independent.

Outline

The basic idea is to consider the row space of B, which we will denote by R = {wᵀB}. Note that A⁻¹B = X and hence the rows of X are contained in R. The crucial observation is:

Observation 5.2.2 The rows of X are the sparsest non-zero vectors in R.

Of course finding the sparsest non-zero ve...
We will prove that the solution to (Q1) is sparse, in particular supp(z) ⊆ supp(c). And for sparse z we will have that ‖zᵀX‖1 ≈ ‖z‖1 (after an appropriate scaling). Hence we can instead analyze the linear program:

(Q1′) min ‖z‖1 s.t. cᵀz = 1

Note that |supp(z)| = 1 if c has a coordinate that is strictly th...
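The claim about (Q1′) can be checked directly: by Hölder's inequality, 1 = cᵀz ≤ ‖c‖∞‖z‖1, so the optimum is a single spike on the strictly largest coordinate of c. A small numeric sanity check (with a hypothetical c):

```python
import numpy as np

rng = np.random.default_rng(7)
c = np.array([0.3, -2.0, 1.1, 0.5])
j = int(np.argmax(np.abs(c)))

# claimed optimum of (Q1'): a single spike on the strictly-largest coordinate
z_star = np.zeros(len(c))
z_star[j] = 1.0 / c[j]
assert abs(c @ z_star - 1.0) < 1e-12               # feasible

# no random feasible point does better in l1 norm
for _ in range(2000):
    z = rng.standard_normal(len(c))
    z /= c @ z                                     # rescale so that c^T z = 1
    assert np.linalg.norm(z, 1) >= np.linalg.norm(z_star, 1) - 1e-9
```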
5.2.3 If R is a set of rows and C is a set of columns, let X^C_R be the submatrix of X that is the intersection of those rows and columns.

Let S be the set of columns of X that have a non-zero entry in J. That is, S = {j | X^j_J ≠ 0}. We now compute:

‖z*ᵀX‖1 = ‖z*ᵀX^S‖1 + ‖z*ᵀX^S̄‖1 ≥ ‖zᵀX^S‖1 − ‖z...
an ε-net of all possible z1's and conclude that (5.1) holds for all z1's. This in turn implies that if z1 is non-zero then ‖z0ᵀX‖1 < ‖z*ᵀX‖1, but this is a contradiction since we assumed that z* is an optimal solution to (Q1). We conclude that z1 is zero and so supp(z*) ⊆ supp(c), as desired.

Step 2

We wish t...
5.3. OVERCOMPLETE DICTIONARIES

Step 3

We can now put everything together. Since c = xi, when we solve (P1) we will get the ith row of X up to scaling. If we solve (P1) repeatedly, we will find all rows of X (and can delete duplicates, since no two rows of X will be scaled copies of each other). Finally we ca...
above model. The running time and sample complexity are poly(n, m). Recall that methods like K-SVD rely on the intuition that if we have the true dictionary A, we can find X, and if we have X we can find a good approximation to A. However the trouble in analyzing these approaches is that they start from a dictionar...
it should be reasonably likely that there is also an edge. We can directly compute the above graph given our examples and the basic idea is that we can hope to learn the support of X by finding the communities. The key point is that this departs from standard clustering problems precisely because each sample xi has...
sub-Gaussian random variables, which satisfy E[Xi] = 0 and Var[Xi] = 1. Let M be a symmetric n × n matrix. Then, for every t ≥ 0 we have:

P[|xᵀMx − trace(M)| > t] ≤ 2 exp(−c min(t²/‖M‖_F², t/‖M‖))

Let Si and Sj be disjoint. Set N = (AᵀA)^{Si}_{Sj} and

M = (1/2) [ 0  N ; Nᵀ  0 ] ...
would need to solve sparse recovery even if we knew A.

Community Finding

Consider the communities Cj = {bi | j ∈ Si}. Then for each pair b1, b2 ∈ Cj there is an edge (b1, b2) with probability at least 1/2, and moreover our intersection graph can be covered by m dense communities {Cj}j. We will introduce the b...
if i ∈ S1 ∩ S2 ∩ S3, then the probability that S4 contains i is k/m. Next we prove the upper bound. Let a = |S1 ∩ S2|, b = |S1 ∩ S3| and c = |S2 ∩ S3|. Then:

Lemma 5.3.5 If S1 ∩ S2 ∩ S3 = ∅, the probability that (b4, bi) is an edge for each i = 1, 2, 3 is at most

O(k⁶/m³ + k³(a + b + c)/m²)

Proof: We ...
is easy to see that with high probability for any i, j, |Si ∩ Sj| = O(1), and hence we want (roughly)

k/m ≫ k⁶/m³ + k³/m²

which is true if k < m^(2/5). When this holds, for every triple b1, b2, b3 we will be able to determine whether or not S1 ∩ S2 ∩ S3 is empty with high probability by counting how many othe...
is not an identifying pair but rather S1 ∩ S2 has more than one element, we would have instead C1,2 = ∪_{j∈S1∩S2} Cj, in which case the set C1,2 will be deleted in the last step. This algorithm outputs the correct communities with high probability if k ≤ c min(√n/(µ log n), m^(2/5)). In [15] the authors give a higher-order a...
a mixture of k Gaussians, we would like to give an efficient algorithm to learn its parameters using few samples. If these parameters are accurate, we can then cluster the samples and our error will be nearly as accurate as the Bayes optimal classifier. 6.1 History The problem of learning the parameters of a mixture ...
Gaussians. Recall that for a univariate Gaussian we have that its density function is given by:

N(µ, σ²) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²))

The density of a multidimensional Gaussian in Rn is given by:

N(µ, Σ) = (1/((2π)^(n/2) det(Σ)^(1/2))) exp(−(x − µ)ᵀΣ⁻¹(x − µ)/2)

Here Σ is the covariance matrix. If Σ = In and µ = 0 the...
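The two densities can be transcribed directly; the sketch below checks the multivariate formula against the univariate one when n = 1 (all parameter values hypothetical):

```python
import numpy as np

def gaussian_density(x, mu, Sigma):
    """Density of N(mu, Sigma) at x, straight from the formula above."""
    n = len(mu)
    d = x - mu
    norm = (2 * np.pi) ** (n / 2) * np.linalg.det(Sigma) ** 0.5
    return float(np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)) / norm)

# sanity check against the univariate formula, taking Sigma = [[sigma^2]]
mu, sigma, x = 1.5, 2.0, 0.7
uni = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
multi = gaussian_density(np.array([x]), np.array([mu]), np.array([[sigma ** 2]]))
assert abs(uni - multi) < 1e-12
```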
polynomial (Pr) in the unknown parameters.

Figure 6.1: A fit of a mixture of two univariate Gaussians to Pearson's data on Naples crabs, created by Peter Macdonald using R. (Image courtesy of Peter D. M. Macdonald. Used with permission.)

Pearson's Sixth Moment Test: We can estimate Ex←F...
Expectation-Maximization

Much of modern statistics instead focuses on the maximum likelihood estimator, which would choose to set the parameters so as to maximize the probability that the mixture would generate the observed samples. Unf...
no “overlap”. The next generation algorithms are based on algebraic ideas, and avoid clustering altogether. Before we proceed, we will discuss some of the counter-intuitive properties of high-dimensional Gaussians. To simplify the discussion, we will focus on spherical Gaussians N (µ, σ2I) in Rn . Fact 6.2.1 The m...
j with high probability, but will contract distances between samples from the same component and make each component closer to spherical, thus making it easier to cluster. We can then cluster all of the samples according to which component generated them, and then for each cluster we can choose the empirical mean and empir...
In fact, any set of vectors in which all but one is uniformly random from a sphere are nearly orthogonal. Now we can compute:

‖x − x′‖² ≈ ‖x − µ1‖² + ‖µ1 − x′‖² ≈ 2nσ² ± 2σ²√(n log n)

And similarly:

‖x − y‖² ≈ ‖x − µ1‖² + ‖µ1 − µ2‖² + ‖µ2 − y‖² ≈ 2nσ² + ‖µ1 − µ2‖² ± 2σ²√(n log n)

Hence if ...
σ²I. Let x ∼ F be a random sample from the mixture; then we can write x = c + z where z ∼ N(0, σ²In) and c is a random vector that takes the value µi with probability wi for each i ∈ [k]. So:

E[xxᵀ] = E[ccᵀ] + E[zzᵀ] = Σ_{i=1}^k wi µi µiᵀ + σ²In

Hence the top left singular vectors of E[xxᵀ] whose singu...
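This spectral observation can be checked with exact (population) moments: the eigenvectors of E[xxᵀ] with eigenvalue larger than σ² span the means (assuming the µi are linearly independent; the values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, sigma2 = 10, 3, 0.5
w = np.array([0.2, 0.3, 0.5])
mus = rng.standard_normal((k, n))

# exact second moment E[xx^T] = sum_i w_i mu_i mu_i^T + sigma^2 I
M = sum(w[i] * np.outer(mus[i], mus[i]) for i in range(k)) + sigma2 * np.eye(n)

# eigenvalues of M are sigma^2 (multiplicity n - k) plus k larger ones; the
# eigenvectors for the k larger eigenvalues span the means
vals, vecs = np.linalg.eigh(M)
top = vecs[:, -k:]                     # eigenvectors with eigenvalue > sigma^2

# each mean lies in the span of the top k eigenvectors
for mu in mus:
    assert np.allclose(top @ (top.T @ mu), mu, atol=1e-8)
```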
[45], [53], [18], [117], [1], [32] have focused on clustering; can we give efficient learning algorithms even when clustering is impossible? Consider a mixture of two Gaussians F = w1F1 + w2F2. The separation conditions we have considered so far each imply that dTV(F1, F2) = 1 − o(1). In particular, the components...
(a) Choose (x, y) from the best coupling between F1 and F2. (b) If x = y, output x. (c) Else output x with probability w1, and otherwise output y.

This procedure generates a random sample from F, but for half of the samples we did not need to decide which component generated it ...
is smooth, we would need to take an exponential number of samples in order to guarantee that F̂ = G ∗ N(0, σ²I) is close to F.

(b) Proper Density Estimation: Here, our goal is to find a distribution F̂ ∈ C where dTV(F, F̂) ≤ ε. Note that if C is the set of mixtures of two Gaussians, then a kernel density estimate...
be close as mixtures too: dTV(F, F̂) ≤ 4ε. However we can have mixtures F and F̂ that are both mixtures of k Gaussians and are close as distributions, but not on a component-by-component basis. It is better to learn F̂ on a component-by-component basis than to do only proper density estimation, if we can. Note that if F̂ is ε-close to F, then even wh...
6.4 Clustering-Free Algorithms

Recall, our goal is to learn F̂ that is ε-close to F. In fact, the same definition can be generalized to mixtures of k Gaussians:

Definition 6.4.1 We will say that a mixture F̂ = Σᵢ ŵᵢF̂ᵢ is ε-close (on a component-by-component basis) to F if there is a permutation π on {1, 2, ..., k} so that for all i ∈ {1, 2, ..., k}: ...
learn wi or wj because Fi and Fj entirely overlap. Again, we need a quantitative lower bound on dTV(Fi, Fj), say dTV(Fi, Fj) ≥ ε for each i ≠ j, so that if we take a reasonable number of samples we will get at least one sample from the non-overlap region between various pairs of components.

Theorem 6....
the parameters be additively close which is the goal in [23]. The benefit is that the algorithm in [23] works for more general learning problems in the one-dimensional setting, and we will describe this algorithm in detail at the end of this chapter. Throughout this section, we will focus on the k = 2 case since thi...
set y = B−1(x − µ), and it is easy to see that E[y] = 0 and E[yyT ] = B−1M (B−1)T = I. • Our goal is to learn an additive ε approximation to F , and we will assume that F has been pre-processed so that it is in isotropic position. Outline We can now describe the basic outline of the algorithm, although there will ...
some notation:

Definition 6.4.6 dp(N(µ1, σ1²), N(µ2, σ2²)) = |µ1 − µ2| + |σ1² − σ2²|

We will refer to this as the parameter distance. Ultimately, we will give a univariate algorithm for learning mixtures of Gaussians and we would like to run it on proj_r[F].

Problem 4 But what if dp(proj_r[F1], proj_r[F2]) is e...
position and wi ≥ ε and dTV(F1, F2) ≥ ε, then with high probability for a random r,

dp(proj_r[F1], proj_r[F2]) ≥ ε³/2 = poly(1/n, ε)

Note that this lemma is not true when F is not in isotropic position (e.g. consider the parallel pancakes example), and moreover when generaliz...
to solve for the parameters of F1, but when we project F onto different directions (say, r and s) we need to pair up the components from these two directions. The key observation is that as we vary from r to s the parameters of the mixture vary continuously. See Figure ??. Hence when we project onto r, we know from the i...
we encounter the final problem in the high-dimensional case: Suppose we choose r randomly and for s1, s2, ..., sp we learn the parameters of the projection of F onto these directions and pair up the components correctly. We can only hope to learn the parameters on these projections up to some additive accuracy ε1 (an...
polynomial number of mixtures to learn an ε-close estimate F̂! But we still need to design a univariate algorithm, and next we return to Pearson's original problem!

6.5 A Univariate Algorithm

Here we will give a univariate algorithm for learning the parameters of a mixture of two Gaussians up to additive accuracy ε...
the first six moments of F(Θ) from enough random examples, and output Θ̂ if its first six moments are each within an additive τ of the observed moments. (This is a slight variant on Pearson's sixth moment test.) It is easy to see that if we take enough samples and set τ appropriately, then if we round the true para...
|M_r(Θ̂) − M_r(Θ)| ≤ |M_r(Θ̂) − M̃_r| + |M̃_r − M_r(Θ)| ≤ 2τ

where M̃_r denotes the observed empirical moment, and each term in the middle is at most τ.
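The six moment test described above can be sketched numerically. This is a hedged illustration, not the book's algorithm verbatim: it uses the standard recursion E[X^r] = µ E[X^{r−1}] + (r−1)σ² E[X^{r−2}] for the raw moments of N(µ, σ²), and a threshold τ chosen generously for the sample size.

```python
import numpy as np

def gaussian_raw_moments(mu, sigma2, up_to=6):
    # E[X^r] for X ~ N(mu, sigma2) via M_r = mu*M_{r-1} + (r-1)*sigma2*M_{r-2}
    M = [1.0, mu]
    for r in range(2, up_to + 1):
        M.append(mu * M[r - 1] + (r - 1) * sigma2 * M[r - 2])
    return M[1:]

def mixture_moments(w, mu1, s1, mu2, s2, up_to=6):
    m1 = gaussian_raw_moments(mu1, s1, up_to)
    m2 = gaussian_raw_moments(mu2, s2, up_to)
    return [w * a + (1 - w) * b for a, b in zip(m1, m2)]

def six_moment_test(candidate, observed, tau):
    # accept iff each of the first six moments is within an additive tau
    return all(abs(c - o) <= tau for c, o in zip(candidate, observed))

# draw samples from the mixture 0.5*N(-1, 1) + 0.5*N(1, 1)
rng = np.random.default_rng(0)
n = 200_000
w, mu1, s1, mu2, s2 = 0.5, -1.0, 1.0, 1.0, 1.0
comp = rng.random(n) < w
x = np.where(comp, rng.normal(mu1, np.sqrt(s1), n), rng.normal(mu2, np.sqrt(s2), n))
observed = [np.mean(x ** r) for r in range(1, 7)]

true_moments = mixture_moments(w, mu1, s1, mu2, s2)
# the true parameters pass the test; a far-off candidate fails it
```

With enough samples the empirical moments concentrate, so only candidates whose first six moments nearly match those of the true mixture survive the test.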
In fact, let us consider the following thought experiment. Let f(x) = F(x) − F̂(x) be the point-wise difference between the density functions F and F̂. Then the heart of the problem is: Can we prove that f(x) crosses the x-axis at most six times? See Figure 6.2. Lemma 6.5.3 If f(x) crosses the x-axis at mos...
to prove is that F(x) − F̂(x) has at most six zero crossings. Let us prove a stronger lemma by induction: Lemma 6.5.4 Let f(x) = Σ_{i=1}^k α_i N(µ_i, σ_i², x) be a linear combination of k Gaussians (the α_i can be negative). Then if f(x) is not identically zero, f(x) has at most 2k − 2 zero crossings. We wi...
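The bound in Lemma 6.5.4 is easy to check numerically by counting sign changes of f on a fine grid. A rough sketch (the grid resolution and the example combinations are choices of this illustration, not from the text):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma2):
    return np.exp(-(x - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

def count_sign_changes(alphas, mus, sigma2s, lo=-20.0, hi=20.0, pts=200001):
    # evaluate f on a fine grid and count sign flips between adjacent points
    x = np.linspace(lo, hi, pts)
    f = sum(a * gaussian_pdf(x, m, s) for a, m, s in zip(alphas, mus, sigma2s))
    signs = np.sign(f)
    signs = signs[signs != 0]          # ignore exact zeros on the grid
    return int(np.sum(signs[1:] != signs[:-1]))

# k = 2, equal variances: N(0, 1, x) - N(1, 1, x) crosses zero once, at x = 1/2
k_equal = count_sign_changes([1.0, -1.0], [0.0, 1.0], [1.0, 1.0])

# k = 2, equal means, different variances: two symmetric crossings
k_var = count_sign_changes([1.0, -1.0], [0.0, 0.0], [1.0, 4.0])
# both counts are <= 2k - 2 = 2, as the lemma requires
```

Grid counting can only undercount crossings (it misses pairs closer than the grid spacing), so it never falsely contradicts the lemma; here both examples achieve crossings well separated on the grid.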
first six moments of a mixture of two Gaussians exactly, then we would know its parameters exactly too. Let us prove the above lemma by induction, and assume that for any linear combination of k = 3 Gaussians, the number of zero crossings is ...
this last step does not increase the number of zero crossings! See Figure 6.3(d). This proves that the system {M_r(Θ̂) = M_r(Θ), r = 1, 2, ..., 6} has only two solutions (the true parameters, and we can also interchange which component is which). In fact, this system of polynomial equations is also sta...
of X, defined as Σ_n M_n x^n / n!, converges in a neighborhood of zero, it uniquely determines the probability distribution, i.e. ∀r, M_r(Θ) = M_r(Θ̂) ⇒ F(Θ) = F(Θ̂). Our goal is to show that for any polynomial family, a finite number of its moments suffice. First we introduce the relevant definitions: Definition 6.6.3 G...
tion converges in a neighborhood of zero, there exists N such that F(Θ) = F(Θ̂) if and only if M_r(Θ) = M_r(Θ̂) for all r ∈ {1, 2, ..., N}. Proof: Let Q_r(Θ, Θ̂) = M_r(Θ) − M_r(Θ̂), and let I_1 = ⟨Q_1⟩, I_2 = ⟨Q_1, Q_2⟩, .... This is our ascending chain of ideals in R[Θ, Θ̂]. We can invoke Hilbert's basis...
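For intuition on why finitely many of the moment polynomials Q_r can suffice, consider the family of a single Gaussian, Θ = (µ, σ²). There the chain already stabilizes at N = 2: the common zero set of Q_1 and Q_2 forces Θ̂ = Θ. A small sympy check (the variable names are illustrative):

```python
import sympy as sp

mu, s, muh, sh = sp.symbols('mu s muh sh', real=True)

# moment polynomials of N(mu, s): M_1 = mu, M_2 = mu^2 + s
Q1 = mu - muh                       # Q_1(Theta, Theta-hat)
Q2 = (mu**2 + s) - (muh**2 + sh)    # Q_2(Theta, Theta-hat)

# the chain <Q1> subset <Q1, Q2> already stabilizes here: the common zero
# set of Q1 and Q2 forces Theta-hat = Theta
sols = sp.solve([Q1, Q2], [muh, sh], dict=True)
# unique solution: muh = mu, sh = s
```

So for this one-parameter-pair family, matching the first two moments already pins down the distribution; Hilbert's basis theorem guarantees some finite N exists for any polynomial family, though with no uniform bound.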
contradiction, but more fundamentally it is not possible to give a bound on N that depends only on the choice of the ring. Consider the following example: Example 1 Consider the Noetherian ring R[x]. Let I_i = ⟨x^{N−i}⟩ for i = 0, ..., N. This is a strictly ascending chain of ideals for i = 0, ..., N. Therefore, ev...
Definition 6.6.7 A set S is semi-algebraic if there exist multivariate polynomials p_1, ..., p_n such that S = {x_1, ..., x_r | p_i(x_1, ..., x_r) ≥ 0}, or if S is a finite union or intersection of such sets. Theorem 6.6.8 (Tarski) The projection of a semi-...
a fixed δ we can choose ε to be strictly greater than zero, and moreover the polynomial relationship between ε and δ only holds if δ is sufficiently small. However these technical issues can be resolved without much more work; see [23]. The main result is the following: Corollary 6.6.10 If |M_r(Θ) − M_r(Θ̂)| ≤ ...
some of the entries of M to fill in the rest of M. Let us be more precise: There is an unknown matrix M ∈ R^{n×m} whose rows represent users and whose columns represent movies in the example above. For each (i, j) ∈ Ω ⊆ [n] × [m] we are given the value M_{i,j}. Our goal is to recover M exactly. Ideally, we would like to...
it would be more believable if the probability that we observe M_{i,j} depends on the value itself. Alternatively, a user should be more likely to rate a movie if he actually liked it. In order to understand the second assumption, suppose Ω is indeed uniformly random. Consider M = Π [I_r, 0; 0, 0], where Π i...
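The obstruction in this example is easy to see numerically: for M = Π [I_r, 0; 0, 0], a uniformly random Ω of moderate size misses all r informative entries a constant fraction of the time, and then no algorithm can possibly recover M. A quick simulation (the parameters and the identity-permutation choice are this sketch's, not the text's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 100, 5
omega_size, trials = 1000, 2000

# The only informative entries of M = Pi [I_r 0; 0 0] are the r ones in the
# I_r block; with Pi = identity these sit at flat indices i*(n+1).
informative = {i * (n + 1) for i in range(r)}

miss = 0
for _ in range(trials):
    # Omega: omega_size cells drawn uniformly without replacement from [n] x [n]
    omega = set(rng.choice(n * n, size=omega_size, replace=False).tolist())
    if not informative & omega:
        miss += 1

frac = miss / trials
# roughly (1 - r/n^2)^|Omega| ~ e^{-0.5} ~ 0.61 of the time, Omega contains
# none of the nonzero entries, so exact recovery is impossible
```

This is why some incoherence-type assumption on M (ruling out matrices whose mass hides in a few entries) is needed even when Ω is uniformly random.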
1.2 Suppose Ω is chosen uniformly at random. Then there is a polynomial time algorithm to recover M exactly that succeeds with high probability if |Ω| ≥ max(µ_1², µ_0) r (n + m) log²(n + m). The algorithm in the theorem above is based on a convex relaxation for the rank of a matrix called the n...
the basis for our algorithms for matrix completion. We will follow an outline parallel to that of compressed sensing. In particular, a natural starting point is the optimization problem: (P0) min rank(X) s.t. X_{i,j} = M_{i,j} for all (i, j) ∈ Ω. This optimization problem is NP-hard. If σ(X) is the vector of singular v...
Let ⟨X, B⟩ = Σ_{i,j} X_{i,j} B_{i,j} = trace(XᵀB) denote the matrix inner product. Lemma 7.2.3 ‖X‖_* = max_{‖B‖≤1} ⟨X, B⟩. To get a feel for this, consider the special case where we restrict X and B to be diagonal. Moreover let X = diag(x) and B = diag(b). Then ‖X...
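Lemma 7.2.3 can be checked numerically: writing X = UΣVᵀ, the matrix B = UVᵀ attains the maximum, since ‖UVᵀ‖ = 1 and ⟨X, UVᵀ⟩ = trace(Σ). A numpy sketch (the random test matrix is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))
U, svals, Vt = np.linalg.svd(X, full_matrices=False)

nuclear = svals.sum()               # ||X||_* is the l1 norm of the spectrum
assert np.isclose(nuclear, np.linalg.norm(X, 'nuc'))

# B = U V^T attains the maximum in Lemma 7.2.3: its operator norm is 1
# (its columns are orthonormal) and <X, B> = trace(Sigma) = ||X||_*
B = U @ Vt
assert np.isclose(np.linalg.norm(B, 2), 1.0)
assert np.isclose(np.trace(X.T @ B), nuclear)
```

This mirrors the diagonal case in the text: for X = diag(x) the certificate is B = diag(sign(x)), and the lemma reduces to the familiar ℓ1/ℓ∞ duality.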
⟨X, B⟩, and the other direction is not much more difficult (see e.g. [74]). How can we show that the solution to (P1) is M? Our basic approach will be a proof by contradiction. Suppose not; then the solution is M + Z for some nonzero Z supported in the complement of Ω (so that M + Z still agrees with the observations). Our goal will be to construc...
u_{r+1}, ..., u_n is an arbitrary orthonormal basis of U^⊥. Similarly choose v_{r+1}, ..., v_n so that v_1, ..., v_n form an orthonormal basis for all of R^n. We will be interested in the following linear spaces over matrices: Definition 7.2.4 T = span{u_i v_jᵀ | 1 ≤ i ≤ r or 1 ≤ j ≤ r or both}. Then T^⊥ = span{u_i v_jᵀ...
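The projections onto T and T^⊥ have a concrete form: with P_U = UUᵀ and P_V = VVᵀ, a standard way to write them (an assumption of this sketch, consistent with the definition of T above) is P_T(Z) = P_U Z + Z P_V − P_U Z P_V and P_{T⊥}(Z) = (I − P_U) Z (I − P_V). A numpy check that these really decompose any Z into orthogonal pieces:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 2

# U, V: r orthonormal columns (think of the SVD factors of the rank-r M)
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((n, r)))
PU, PV = U @ U.T, V @ V.T

def P_T(Z):
    # projection onto T = span{u_i v_j^T : i <= r or j <= r}
    return PU @ Z + Z @ PV - PU @ Z @ PV

def P_Tperp(Z):
    # complementary projection, onto span{u_i v_j^T : i > r and j > r}
    Id = np.eye(n)
    return (Id - PU) @ Z @ (Id - PV)

Z = rng.standard_normal((n, n))
assert np.allclose(P_T(Z) + P_Tperp(Z), Z)          # they decompose Z
assert abs(np.sum(P_T(Z) * P_Tperp(Z))) < 1e-10     # and are orthogonal
```

The identity P_T + P_{T⊥} = id follows by expanding (I − P_U) Z (I − P_V), and orthogonality holds because P_{T⊥}(Z) is annihilated by P_U on the left and P_V on the right.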
the helper matrix Y and prove that if such a Y exists, then M is the solution to (P1). We require that Y is supported in Ω and (a) ‖P_T(Y) − UVᵀ‖_F ≤ √(r/8n) (b) ‖P_{T⊥}(Y)‖ ≤ 1/2. We want to prove that for any Z supported in the complement of Ω, ‖M + Z‖_* > ‖M‖_*. Recall, we want t...
where in the last line we used the fact that M is orthogonal to U_⊥V_⊥ᵀ. Now using the fact that Y and Z have disjoint supports we can conclude: ‖M + Z‖_* ≥ ‖M‖_* + ⟨Z, UVᵀ + U_⊥V_⊥ᵀ − Y⟩. Therefore in order to prove the main result in this section it suffices to prove that ⟨Z, UVᵀ + U_⊥V...
[Z] and hence ⟨U_⊥V_⊥ᵀ, P_{T⊥}[Z]⟩ = ‖P_{T⊥}[Z]‖_*. Now we can invoke the properties of Y that we have assumed in this section to prove a lower bound on the right hand side. By property (a) of Y, we have that ‖P_T(Y) − UVᵀ‖_F ≤ √(r/8n). Therefore, we know that the first term ⟨P_T(Z), UVᵀ − P_T...
‖P_{T⊥}(Z)‖_* > √(r/2n) ‖P_T(Z)‖_F to complete the proof we started in the previous section. We will make use of an approach introduced by Gross [67] and we will follow the proof of Recht in [103], where the strategy is to construct Y iteratively. In each phase, we will invoke concentration results for matrix-valued rand...
following toy problem. Let u_k be a random unit vector in R^d and let X_k = u_k u_kᵀ. Then it is easy to see that ρ² = 1/d. How many trials do we need so that the sum Σ_k X_k is close to the identity (after scaling)? We should expect to need Θ(d log d) trials; this is even true if u_k is drawn uniformly at random from the standard bas...
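The coupon-collector intuition is easy to simulate: when u_k is drawn uniformly from the standard basis, Σ_k u_k u_kᵀ is diagonal, and entry i just counts how often e_i was drawn. A quick sketch (the dimension and the constant in front of d log d are choices of this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
n_draws = int(3 * d * np.log(d))     # Theta(d log d) trials

# with u_k drawn uniformly from the standard basis, sum_k X_k is diagonal
# and entry i counts how often e_i was drawn -- a coupon collector problem
counts = np.bincount(rng.integers(0, d, size=n_draws), minlength=d)
S = np.diag(counts / (n_draws / d))  # rescale so that E[S] = I

assert counts.min() > 0              # every basis vector was collected
evals = np.linalg.eigvalsh(S)
assert 0.1 < evals.min() and evals.max() < 3.0
# S is within a constant factor of the identity, as expected
```

With only O(d) draws, some coordinate is missed with constant probability and the scaled sum is singular; the extra log d factor is exactly what coupon collecting demands.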
framework of the matrix Bernstein inequality, but for a full proof see [103]. Note that E[P_T R_Ω P_T] = P_T E[R_Ω] P_T = (m/n²) P_T, and so we just need to show that P_T R_Ω P_T does not deviate too far from its expectation. Let e_1, e_2, ..., e_d be the standard basis vectors. Then we can expand: P_T(Z) = Σ ⟨P_T(...
7.3. QUANTUM GOLFING

Lemma 7.3.5 If Ω is chosen uniformly at random and m ≥ nr log n, then with high probability, for any Z supported in the complement of Ω we have ‖P_{T⊥}(Z)‖_* > √(r/2n) ‖P_T(Z)‖_F. Proof: Using Lemma 7.3.3 and the definition of the operator norm (see the remark) we have ... Furthermore we can upper b...
... ‖P_T(Z)‖_F. All that remains is to prove that the helper matrix Y that we made use of actually does exist (with high probability). Recall that we require that Y is supported in Ω and ‖P_T(Y) − UVᵀ‖_F ≤ √(r/8n) and ‖P_{T⊥}(Y)‖ ≤ 1/2. The basic idea is to break up Ω into disjoint sets Ω_1, Ω_2, ..., Ω_p, where ...
It is easy to see that Y = Σ_i Y_i is supported in Ω and that W_i = UVᵀ − P_T(Y_i) for all i. Hence we can compute ‖P_T(Y_i) − UVᵀ‖_F = ‖W_i‖_F ≤ ‖(P_T − (n²/m) P_T R_{Ω_i} P_T)(W_{i−1})‖_F ...
‖P_{T⊥}(Y)‖ ≤ Σ_i ‖P_{T⊥}(Y_i)‖, where the dominant contribution comes from the first term. For the full details see [103]. This completes the proof that computing the solution to the convex program indeed finds M exactly, provided that M is incoherent and |Ω| ≥ max(µ_1², µ_0) r (n + m) log²(n + m).

Bibliography

[1] D. Achlioptas and F. McSherry. O...
[7] N. Alon. Tools from higher algebra. In Handbook of Combinatorics, pages 1749–1783, 1996. [8] A. Anandkumar, D. Foster, D. Hsu, S. Kakade and Y. Liu. A spectral algorithm for Latent Dirichlet Allocation. In NIPS, pages 926–934, 2012. [9] A. Anandkumar, R. Ge, D. Hsu and S. Kakade. A tensor spectral approach t...
– provably. In STOC, pages 145–162, 2012. [14] S. Arora, R. Ge and A. Moitra. Learning topic models - going beyond SVD. In FOCS, pages 1–10, 2012. [15] S. Arora, R. Ge and A. Moitra. New algorithms for learning incoherent and overcomplete dictionaries. arXiv:1308.6273, 2013. [16] S. Arora, R. Ge, A. Moitra and S. Sa...
] M. Belkin and K. Sinha. Toward learning Gaussian mixtures with arbitrary separation. In COLT, pages 407–419, 2010. [23] M. Belkin and K. Sinha. Polynomial learning of distribution families. In FOCS, pages 103–112, 2010. [24] Q. Berthet and P. Rigollet. Complexity theoretic lower bounds for sparse principal comp...