text — string, lengths 16 to 3.88k
source — string, lengths 60 to 201
of Wo is easy via the Lyapunov equation Wo A + A′Wo = −C′C. The SVD of a Hankel operator H can be expressed in terms of its Gramians: let wi be the normalized eigenvectors of R = Wc^{1/2} Wo Wc^{1/2}, i.e. R wi = γi wi, γ1 ≥ γ2 ≥ …, γm > 0, γm+1 = 0. The SVD of H is given by H = Σ_{k=1}^{m} uk λk vk′, where λk = γk^{1/2} ...
https://ocw.mit.edu/courses/6-245-multivariable-control-systems-spring-2004/2abdecbaa80cd2d06ffa931ca7ca34c3_lec8_6245_2004.pdf
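The Gramian computations above can be sketched numerically. A minimal illustration (not part of the lecture notes; the system matrices are made up, and scipy is assumed) that solves the two Lyapunov equations and recovers the Hankel singular values:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A stable example system (values chosen arbitrarily for illustration)
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

# Controllability Gramian: A Wc + Wc A' = -B B'
Wc = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Gramian: A' Wo + Wo A = -C' C
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: sqrt of the eigenvalues of Wc Wo, which are
# the same as the eigenvalues gamma_k of R = Wc^{1/2} Wo Wc^{1/2}
# (the two matrices are similar).
gammas = np.sort(np.linalg.eigvals(Wc @ Wo).real)[::-1]
sigmas = np.sqrt(gammas)
print(sigmas)
```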
2.160 Identification, Estimation, and Learning. Lecture Notes No. 2, February 13, 2006. 2. Parameter Estimation for Deterministic Systems. 2.1 Least Squares Estimation. [Figure: a deterministic system with parameter θ maps inputs u_1, …, u_m to output y.] Linearly parameterized model, input-output: y = b_1 u_1 + b_2 u_2 + … + b_m u_m. Param...
https://ocw.mit.edu/courses/2-160-identification-estimation-and-learning-spring-2006/2ada973a6a0171832bafbb05c177fcc9_lecture_2.pdf
y(t) = b_1 u(t−1) + b_2 u(t−2), ϕ(t) = [u(t−1) u(t−2)]ᵀ. Using an estimated parameter vector θ̂, we can write a predictor that predicts the output from inputs: ŷ(t) = ϕ(t)ᵀ θ̂ (2). We evaluate the predictor's performance by the squared error given by V_N(θ) = (1/N) Σ_{t=1}^{N} (… Problem: Find th...
https://ocw.mit.edu/courses/2-160-identification-estimation-and-learning-spring-2006/2ada973a6a0171832bafbb05c177fcc9_lecture_2.pdf
where P = (Σ_{t=1}^{N} ϕ(t)ϕ(t)ᵀ)⁻¹ = (ΦΦᵀ)⁻¹ and B = Σ_{t=1}^{N} y(t)ϕ(t) (8)–(10). 2.2 The Recursive Least-Squares Algorithm. While the above algorithm is for batch processing of the whole data set, we often need to estimate parameters in real time, where data are coming from a dynamical system. A...
https://ocw.mit.edu/courses/2-160-identification-estimation-and-learning-spring-2006/2ada973a6a0171832bafbb05c177fcc9_lecture_2.pdf
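The batch estimate θ̂ = PB can be sketched in a few lines. A hypothetical two-parameter example (not from the notes), using the regressor ϕ(t) = [u(t−1) u(t−2)]ᵀ:

```python
import numpy as np

rng = np.random.default_rng(0)
b_true = np.array([0.5, -0.3])            # made-up true parameters
u = rng.standard_normal(102)

# rows of Phi are the regressors phi(t)' = [u(t-1), u(t-2)]
Phi = np.column_stack([u[1:-1], u[:-2]])  # N x m
y = Phi @ b_true                          # noise-free outputs

# theta_hat = P B with P = (sum phi phi')^{-1}, B = sum y(t) phi(t)
P = np.linalg.inv(Phi.T @ Phi)
Bvec = Phi.T @ y
theta_hat = P @ Bvec
print(theta_hat)
```

On noise-free data the estimate recovers the true parameters exactly (up to round-off).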
B = Σ_{t=1}^{N} y(t)ϕ(t). Three steps for obtaining a recursive computation algorithm: a) Splitting B_t and P_t. From (10), B_t = Σ_{i=1}^{t} y(i)ϕ(i) = Σ_{i=1}^{t−1} y(i)ϕ(i) + y(t)ϕ(t), so B_t = B_{t−1} + y(t)ϕ(t). From (11), P_{t−1}⁻¹ = Σ_{i=1}^{t−1} ϕ(i)ϕ(i)ᵀ, and P_t⁻¹ = P_{t−1}⁻¹ + ...
https://ocw.mit.edu/courses/2-160-identification-estimation-and-learning-spring-2006/2ada973a6a0171832bafbb05c177fcc9_lecture_2.pdf
P_{t−1} − P_t = P_{t−1}ϕ(t)ϕ(t)ᵀP_{t−1} / (1 + ϕ(t)ᵀP_{t−1}ϕ(t)) (18). Therefore, P_t = P_{t−1} − P_{t−1}ϕ(t)ϕ(t)ᵀP_{t−1} / (1 + ϕ(t)ᵀP_{t−1}ϕ(t)). Note that no matrix inversion is needed for updating P_t! This is a special case of the Matrix Inversion L...
https://ocw.mit.edu/courses/2-160-identification-estimation-and-learning-spring-2006/2ada973a6a0171832bafbb05c177fcc9_lecture_2.pdf
K_t = P_{t−1}ϕ(t) / (1 + ϕ(t)ᵀP_{t−1}ϕ(t)). Replacing this by K_t ∈ R^{m×1}, we obtain (17). The Recursive Least Squares (RLS) Algorithm: θ̂(t) = θ̂(t−1) + [P_{t−1}ϕ(t) / (1 + ϕ(t)ᵀP_{t−1}ϕ(t))] (y(t) − ϕ(t)ᵀθ̂(t−1)); P_t = P_{t−1} − P_{t−1}ϕ(t)ϕ(t)ᵀP_{t−1} / (1 + ϕ(t)ᵀP_{t−1}ϕ(t)) ...
https://ocw.mit.edu/courses/2-160-identification-estimation-and-learning-spring-2006/2ada973a6a0171832bafbb05c177fcc9_lecture_2.pdf
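The RLS recursion above can be sketched directly (a minimal illustration with made-up data, not from the notes; on noise-free data it converges to the batch estimate):

```python
import numpy as np

def rls_step(theta, P, phi, y):
    """One RLS update: gain, parameter, and covariance (no inversion)."""
    denom = 1.0 + phi @ P @ phi
    K = (P @ phi) / denom                   # gain K_t
    theta = theta + K * (y - phi @ theta)   # innovation update
    P = P - np.outer(P @ phi, phi @ P) / denom
    return theta, P

rng = np.random.default_rng(1)
b_true = np.array([0.5, -0.3])
u = rng.standard_normal(202)
Phi = np.column_stack([u[1:-1], u[:-2]])    # phi(t)' = [u(t-1), u(t-2)]
y = Phi @ b_true

theta = np.zeros(2)
P = 1e6 * np.eye(2)    # large initial P ~ nearly uninformative prior
for phi_t, y_t in zip(Phi, y):
    theta, P = rls_step(theta, P, phi_t, y_t)
print(theta)
```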
3. Convex functions (Convex Optimization — Boyd & Vandenberghe): basic properties and examples; operations that preserve convexity; the conjugate function; quasiconvex functions; log-concave and log-convex functions; convexity with respect to generalized inequalities. Definition: f : Rⁿ → R is conve...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
for any a, b ∈ R; powers: x^α on R₊₊, for 0 ≤ α ≤ 1; logarithm: log x on R₊₊. Examples on Rⁿ and R^{m×n}: affine functions are convex and concave; all norms are convex. Examples on Rⁿ: affine function f(x) = aᵀx + b; norms: ‖x‖_p = (Σ_i |x_i|^p)^{1/p} fo...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
R with f(X) = log det X, dom f = Sⁿ₊₊: g(t) = log det(X + tV) = log det X + log det(I + tX^{−1/2}VX^{−1/2}) = log det X + Σ_{i=1}^{n} log(1 + tλ_i), where the λ_i are the eigenvalues of X^{−1/2}VX^{−1/2}. g is concave in t (for any choice of X ≻ 0, V); hence f is concave. Extended-value extension e...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
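The eigenvalue identity for g(t) is easy to verify numerically. A sketch (not from the slides; random X ≻ 0 and symmetric V, and a Cholesky factor of X is used in place of X^{1/2}, which leaves the eigenvalues unchanged):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
Q = rng.standard_normal((n, n))
X = Q @ Q.T + n * np.eye(n)            # X > 0
V = rng.standard_normal((n, n))
V = (V + V.T) / 2                      # symmetric V

# eigenvalues of L^{-1} V L^{-T} equal those of X^{-1/2} V X^{-1/2}
L = np.linalg.cholesky(X)
Li = np.linalg.inv(L)
lam = np.linalg.eigvalsh(Li @ V @ Li.T)

def g(t):
    return np.linalg.slogdet(X + t * V)[1]   # log det(X + tV)

t = 0.1
lhs = g(t)
rhs = g(0) + np.log1p(t * lam).sum()
print(lhs, rhs)    # the two expressions agree
```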
∇f(x) = (∂f(x)/∂x₁, …, ∂f(x)/∂xₙ) exists at each x ∈ dom f. 1st-order condition: differentiable f with convex domain is convex iff f(y) ≥ f(x) + ∇f(x)ᵀ(y − x) for all x, y ∈ dom f. [Figure: the tangent f(x) + ∇f(x)ᵀ(y − x) at (x, f(x)); the first-order approximation of f is a global underestimator.] Second-o...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
f(x) = ‖Ax − b‖₂²: ∇f(x) = 2Aᵀ(Ax − b), ∇²f(x) = 2AᵀA, convex (for any A). quadratic-over-linear: f(x, y) = x²/y, ∇²f(x, y) = (2/y³)[y −x]ᵀ[y −x] ⪰ 0, convex for y > 0. [Figure: surface plot of f(x, y).] log-sum-exp: f(x) = log Σ_{k=1}^{n} exp x_k is convex ...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
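A quick numeric sanity check of log-sum-exp convexity (not from the slides), using the standard max-shift for numerical stability:

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log(sum(exp(x))): shift by the max."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

rng = np.random.default_rng(3)
# midpoint form of Jensen's inequality on random pairs
for _ in range(100):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    mid = logsumexp(0.5 * (x + y))
    assert mid <= 0.5 * (logsumexp(x) + logsumexp(y)) + 1e-12
print("log-sum-exp passed the midpoint convexity check")
```

A midpoint check is evidence, not a proof; the slides' Hessian argument is what establishes convexity.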
of f : Rⁿ → R: C_α = {x ∈ dom f | f(x) ≤ α}. Sublevel sets of convex functions are convex (the converse is false). Epigraph of f : Rⁿ → R: epi f = {(x, t) ∈ Rⁿ⁺¹ | x ∈ dom f, f(x) ≤ t}. f is convex if and only if epi f is a convex set. Jensen's inequality. Basic inequality: if...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
Positive weighted sum & composition with affine function. Nonnegative multiple: αf is convex if f is convex, α ≥ 0. Sum: f₁ + f₂ convex if f₁, f₂ convex (extends to infinite sums, integrals). Composition with affine function: f(Ax + b) is convex if f is convex. Examples: log barrier for linea...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
is convex in x for each y ∈ A, then g(x) = sup_{y∈A} f(x, y) is convex. Examples: support function of a set C: S_C(x) = sup_{y∈C} yᵀx is convex; distance to the farthest point in a set C: f(x) = sup_{y∈C} ‖x − y‖; maximum eigenvalue of a symmetric matrix: for X ∈ Sⁿ, λ_max(X) = sup_{‖y‖₂=1} yᵀXy ...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
if g_i convex, h convex, h̃ nondecreasing in each argument; or g_i concave, h convex, h̃ nonincreasing in each argument. Proof (for n = 1, differentiable g, h): f″(x) = g′(x)ᵀ∇²h(g(x))g′(x) + ∇h(g(x))ᵀg″(x). Examples: Σ_{i=1}^{m} log g_i(x) is concave if the g_i are concave and positive; log Σ_{i=1}^{m} exp g_i(x) is con...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
/t), dom g = {(x, t) | x/t ∈ dom f, t > 0}; g is convex if f is convex. Examples: f(x) = xᵀx is convex, hence g(x, t) = xᵀx/t is convex for t > 0; the negative logarithm f(x) = −log x is convex, hence the relative entropy g(x, t) = t log t − t log x is convex on R²₊₊. If f is convex, then g(x) = (cᵀx + d) f(...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
= (1/2)xᵀQx): f*(y) = sup_x (yᵀx − (1/2)xᵀQx) = (1/2)yᵀQ⁻¹y. Quasiconvex functions. f : Rⁿ → R is quasiconvex if dom f is convex and the sublevel sets S_α = {x ∈ dom f | f(x) ≤ α} are convex for all α. [Figure: a quasiconvex function on R with levels α, β and points a, b, c.] f is quasiconcave if −f is quasiconvex; f is quasilinear if it is quasiconvex and q...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
(to us if x_i > 0); we assume x₀ < 0 and x₀ + x₁ + ⋯ + xₙ > 0. Present value of cash flow x, for interest rate r: PV(x, r) = Σ_{i=0}^{n} (1 + r)^{−i} x_i. The internal rate of return is the smallest interest rate for which PV(x, r) = 0: IRR(x) = inf{r ≥ 0 | PV(x, r) = 0}. IRR is quasiconcave: superlevel set ...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
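PV and IRR as defined above can be sketched directly. The bisection below is a hypothetical illustration; it assumes x₀ < 0 and all later cash flows nonnegative, so PV(x, ·) crosses zero exactly once on the bracket:

```python
import numpy as np

def pv(x, r):
    """Present value of cash flow x at interest rate r."""
    i = np.arange(len(x))
    return np.sum(np.asarray(x, dtype=float) / (1.0 + r) ** i)

def irr(x, lo=0.0, hi=10.0, iters=200):
    """Smallest r >= 0 with PV(x, r) = 0, found by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if pv(x, mid) > 0:   # rate too low: PV still positive
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# invest 1 now, receive 0.6 in each of the next two periods
x = [-1.0, 0.6, 0.6]
r = irr(x)
print(r)
```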
f(θx + (1 − θ)y) ≥ f(x)^θ f(y)^{1−θ} for 0 ≤ θ ≤ 1; f is log-convex if log f is convex. Powers: x^a on R₊₊ is log-convex for a ≤ 0, log-concave for a ≥ 0. Many common probability densities are log-concave, e.g., the normal: f(x) = (1/√((2π)ⁿ det Σ)) e^{−(1/2)(x−x̄)ᵀΣ⁻¹(x−x̄)}. The cumulative Gaussian distribution function Φ is ...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
if C ⊆ Rⁿ is convex and y is a random variable with a log-concave pdf, then f(x) = prob(x + y ∈ C) is log-concave. Proof: write f(x) as an integral of a product of log-concave functions: f(x) = ∫ g(x + y)p(y) dy, where g(u) = 1 if u ∈ C, g(u) = 0 if u ∉ C, and p is the pdf of y. Example: yield function Y(x) = prob...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
X, i.e., zᵀ(θX + (1 − θ)Y)²z ≤ θzᵀX²z + (1 − θ)zᵀY²z for X, Y ∈ Sᵐ, 0 ≤ θ ≤ 1; therefore (θX + (1 − θ)Y)² ⪯ θX² + (1 − θ)Y². MIT OpenCourseWare, http://ocw.mit.edu. 6.079 / 6.975 Introduction to Convex Optimization, Fall 2009. For information about citing these ma...
https://ocw.mit.edu/courses/6-079-introduction-to-convex-optimization-fall-2009/2ae23d35685ff402473b36011138149a_MIT6_079F09_lec03.pdf
Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science. 6.685 Electric Machines, Class Notes 6: DC (Commutator) and Permanent Magnet Machines. © 2005 James L. Kirtley Jr. 1 Introduction. Virtually all electric machines, and all practical electric machines employ some form of...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
utator system. The interaction magnetic field is provided (in this picture) by a field winding. A permanent magnet field is applicable here, and we will have quite a lot more to say about such arrangements below. Now, if we assume that the interaction magnetic flux density averages Br, and if there are Ca conductors undern...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
of noting the apparent electric field within a moving object (as in the conductors in a DC machine): E⃗′ = E⃗ + v⃗ × B⃗. Now, note that the armature conductors are moving through the magnetic field produced by the stator (field) poles, and we can ascribe to them an axially directed electric field: E_z = −RΩB_r ...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
The two “break points” are at zero speed and at the “zero torque” speed: Ω₀ = V_a / (G I_f). [Figure 3: DC Machine Equivalent Circuit. Figure 4: DC Machine Operating Regimes (electrical and mechanical).] For 0 < Ω < Ω₀, the machine is a motor: electric power in and mechanical power out are both positive. For ...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
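The break-point speed and the motoring regime can be illustrated with the equivalent-circuit relations V_a = G I_f Ω + R_a I_a and T_e = G I_f I_a. A sketch with made-up numbers (not from the notes):

```python
# Separately excited DC machine, steady state:
#   Va = G*If*Omega + Ra*Ia   (armature loop)
#   Te = G*If*Ia              (torque)
Va, Ra = 120.0, 0.5      # armature voltage [V], resistance [ohm]
G_If = 1.2               # back-EMF constant G*If [V*s/rad]

Omega0 = Va / G_If       # "zero torque" speed [rad/s]
print(Omega0)

Omega = 0.8 * Omega0     # an operating point below Omega0
Ia = (Va - G_If * Omega) / Ra
P_elec = Va * Ia                 # electric power in
P_mech = G_If * Ia * Omega       # mechanical power out
print(P_elec > 0 and P_mech > 0) # both positive: motoring
```

The gap P_elec − P_mech is the armature copper loss Ia²Ra.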
: Two-Chopper, separately excited machine hookup. [Figure 6: Series Connection.] not yield any meaningful ability to control speed, and the simple applications in which it used to be employed are now handled by induction machines. Another connection which is still widely used is the series connection, in which the field win...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
RPM limit of AC motors, and this is one reason why they are so widely used: with the high rotational speeds it is possible to produce more power per unit mass (and more power per dollar). 1.3 Commutator: The commutator is what makes this machine work. The brush and commutator system of this class of motor involves quit...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
: Commutation Interpoles. In larger machines the commutation process would involve too much sparking, which causes brush wear, noxious gases (ozone) that promote corrosion, etc. In these cases it is common to use separate commutation interpoles. These are separate, usually narrow or seemingly vestigial pole pieces which ...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
the same number of ampere-turns as the armature. Normally they have the same number of turns and are connected directly in series with the armature brushes. What they do is to almost exactly cancel the flux produced by the armature coils, leaving only the main flux produced by the field winding. One might think of these c...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
[Figure 11: Hysteresis loop of a ceramic permanent magnet; B (Tesla) vs. H (kiloamperes/meter).] Permanent magnet materials are, at core, just materials with very wide hysteresis loops. Figure 11 is an example of something close to one of the more popular ceramic magnet materials. Note that this hystere...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
is 10⁻⁴ Tesla. And, finally, an Oersted is that field intensity required to produce one Gauss in the permeability of free space. [Figure 12: Demagnetization curve: B (Tesla) vs. H (kA/m), showing remanent flux density B_r, coercive field H_c, and energy product loci.] Since the perm...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
down in the air-gap. Thus we are following the same reference direction as we go around the Ampere's Law loop. That becomes: ∮ H⃗ · dℓ⃗ = H_m h_m + H_g g. Now, Gauss' law could be written for either the upper or lower piece of the magnetic circuit. Assuming that the only substantive flux leaving or entering the magnetic cir...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
[Figure 15: Load Line, Unit Permeance Analysis. Figure 16: Surface Magnet Primitive Problem (permanent magnet magnetic circuit, µ → ∞).] In the region of the magnet and the air-gap, Ampere's Law and Gauss' law can be written: ∇ × H⃗ = 0, ∇ · µ₀(H⃗_m + M⃗₀) = 0, ∇ · µ₀H⃗_g = 0. Now, if in the magnet the ...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
be made. We would produce the same air-gap flux density if we regard the permanent magnet as having a surface current around the periphery equal to the magnetization intensity. That is, the surface current running around the magnet is K_φ = M₀. This would produce an MMF in the gap of F = K_φ h_m, and then, since the magnetic fie...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
1.1. 2. The “reluctance factor” f is cited as being about 1.2. We may further estimate the ratio of the areas of the gap and magnet by: A_g / A_m = (R + g/2) / (R + g + h_m/2). Now, there are a bunch of approximations and hand wavings in this expression, but it seems to work, at least for the kind of machines contemplated. A second...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
coil-side waveforms that add with a slight phase shift. [Figure 19: Voltage induced in a coil.] If, on the other hand, the coil throw is smaller than the magnet angle, the picture is the same, only the width of the pulses is that of the coil rather than the magnet. In either case the average voltage generated b...
https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/2aec052d0c82b73a7b6c72d78f3a796d_MIT6_685F13_chapter6.pdf
Algorithmic Aspects of Machine Learning, Ankur Moitra. © Draft date March 30, 2014. Algorithmic Aspects of Machine Learning © 2015 by Ankur Moitra. Note: These are unpolished, incomplete course notes. Developed for educational use at MIT and for publication through MIT OpenCourseware. Contents. Preface 1 I...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
… 50. 4 Sparse Recovery, 53. 4.1 Basics, 53. 4.2 Uniqueness and Uncertainty Principles, 56. 4.3 Pursuit Algorithms, 59. ...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
… 89. 6.4 Clustering-Free Algorithms, 91. 6.5 A Univariate Algorithm, 96. 6.6 A View from Algebraic Geometry, 101. 7 Matrix Completion, 105. 7.1 Bac...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
This course will be organized around algorithmic issues that arise in machine learning. The usual paradigm for algorithm design is to give an algorithm that succeeds on all possible inputs, but the difficulty is that almost all of the optimization problems that arise in modern machine learning are computationally int...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
matrix factorization, (b) topic modeling, (c) tensor decompositions, (d) sparse recovery, (e) dictionary learning, (f) learning mixture models, (g) matrix completion. Hopefully more sections will be added to this course over time, since there are a vast number of topics at the intersection of algorithms and machine l...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
read off the best low-rank approximation to M from it. Definition 2.1.1 The Frobenius norm of a matrix M is ‖M‖_F = √(Σ_{i,j} M_{i,j}²). Alternately, if M = Σ_{i=1}^{r} σ_i u_i v_iᵀ, then ‖M‖_F = √(Σ_i σ_i²). Consider the following optimization problem: Let B be the best rank-k approximation to M in the Frobenius norm, i.e. B is the mini...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
to M in the operator norm is also attained by B = Σ_{i=1}^{k} σ_i u_i v_iᵀ, in which case the error is ‖M − B‖₂ = σ_{k+1}. Let us give one more interpretation of the singular value decomposition. We can regard an m × n matrix M as a collection of n data points in Rᵐ. We associate a distribution Δ with this set of points whic...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
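Both error formulas are easy to check with numpy. A sketch on random data (not from the notes):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(M, full_matrices=False)

k = 2
# best rank-k approximation: keep the top k singular triples
B = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# operator-norm error is exactly sigma_{k+1}
err = np.linalg.norm(M - B, ord=2)
print(np.isclose(err, s[k]))                              # True
# Frobenius-norm error is sqrt(sum of the remaining sigma_i^2)
errF = np.linalg.norm(M - B, ord='fro')
print(np.isclose(errF, np.sqrt(np.sum(s[k:] ** 2))))      # True
```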
to reduce M to bidiagonal form using Householder reflections, and then to compute the singular value decomposition from this representation using the QR algorithm. Next we will describe an application to text analysis. Applications to Text Analysis. Latent Semantic Indexing [49]: Suppose we are given a large collect...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
compute a “similarity” score for document i and document j. We could do this by computing ⟨M_i, M_j⟩, where M_i is the ith column of M, etc. This function “counts” the number of words in common. In particular, given a query we would judge how similar a document is to it just by counting how many of its words occur i...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
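A toy latent semantic indexing sketch (hypothetical 4-term, 4-document count matrix, not from the notes): projecting the documents onto the span of the top-k left singular vectors keeps similar documents similar and unrelated ones unrelated:

```python
import numpy as np

# toy term-document matrix M (terms x documents); entries are counts
M = np.array([
    [2, 1, 0, 0],
    [1, 2, 0, 0],
    [0, 0, 2, 1],
    [0, 0, 1, 2],
], dtype=float)

# raw similarity: inner products of document columns
raw = M.T @ M

# LSI: project onto the span of the top-k left singular vectors
U, s, Vt = np.linalg.svd(M)
k = 2
P = U[:, :k] @ U[:, :k].T    # projector onto top-k term directions
Mk = P @ M
lsi = Mk.T @ Mk
print(raw[0, 1], lsi[0, 1])  # docs 0 and 1 remain similar
print(raw[0, 2], lsi[0, 2])  # docs 0 and 2 stay dissimilar
```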
sum to one. It is not hard to see that if D is a diagonal matrix where the ith entry is the reciprocal of the sum of the entries in the ith column of A, then M = ÃW̃ where à = AD and W̃ = D⁻¹W normalizes the data so that the columns of à and of W̃ each sum to one. Hence we are finding a set...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
identifying interesting instances of the problem. The goal of this course is to not give up when faced with intractability, and to look for new explanations. These explanations could be new models (that avoid the aspects of the problem that allow us to embed hard problems) or could be identifying conditions under w...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
is much smaller than m or n and with this in mind we will instead ask: What is the complexity of this problem as a function of k? We will make use of tools from algebra to give a polynomial time algorithm for any k = O(1). In fact, the algorithm we present here will be nearly optimal in terms of its dependence on k...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
. It is easy to see that the columns of M are spanned by the vectors (1, 1, …, 1)ᵀ, (1, 2, …, n)ᵀ, and (1², 2², …, n²)ᵀ. It is easy to see that rank(M) = 3. However, M has zeros along the diagonal and non-zeros off it. Furthermore for any...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
M = AW, A ≥ 0, W ≥ 0. This system consists of quadratic equality constraints (one for each entry of M), and linear constraints that A and W be entry-wise nonnegative. Before trying to design better algorithms for k = O(1), we should ask a more basic question (whose answer is not at all obvious): Que...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
decompositions. This line of work culminated in algorithms whose running time is exponential in the number of variables but is polynomial in all the other parameters of the problem (the number of polynomial inequalities, the maximum degree and the bit complexity of the coefficients). The running time is (nD)O(r) where...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
ic if there exist multivariate polynomials p₁, …, pₙ such that S = {x₁, …, x_r | p_i(x₁, …, x_r) ≥ 0}, or if S is a finite union or intersection of such sets. Definition 2.2.4 The projection of a semialgebraic set S is defined as proj_S(X₁, …, X_ℓ) = {x₁, …, x_ℓ | ∃ x_{ℓ+1}, …, x_r such that p(x₁, …, x_r) ∈ S}. 2.2. ...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
x₁, …, x_r | B(p₁(x₁, …, x_r), …, pₙ(x₁, …, x_r)) = true} is non-empty, and we assume that we can evaluate B (but not, say, that it has a succinct circuit). A related result is the famous Milnor-Warren bound (see e.g. [7]): Theorem 2.2.6 (Milnor-Warren) Given m polynomials p₁, …, p_m of degree ≤ D on r variables...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
4 Can we find an alternate system of polynomial inequalities that expresses the same decision problem but uses many fewer variables? We will focus on a special case called simplicial factorization where rank(M) = k. In this case, we are asking whether or not rank₊(M) = rank(M) = k, and this simplifies matters beca...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
have made progress since this system also has nk + mk variables corresponding to the entries of A⁺ and W⁺. However, consider the matrix A⁺M. If we represent A⁺ as a k × n matrix then we are describing its action on all vectors, but the crucial observation is that we only need to know how A⁺ acts on the columns o...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
nm)O(k2) and solves the simplicial factorization problem. The above approach is based on the paper of Arora et al [13] where the authors also give a variable reduction procedure for nonnegative matrix factorization (in the general case where A and W need not have full column or row rank respectively). The authors r...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
give an algorithm that runs in polynomial time (even for large values of r). Our discussion will revolve around the intermediate simplex problem. Intermediate Simplex Problem. Let us define the intermediate simplex problem: We are given two polytopes Q and P with...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
(P0), intermediate simplex and the simplicial factorization problem are each polynomial time interreducible. It is easy to see that (P0) and the simplicial factorization problem are equivalent since in any two factorizations M = U V or M = AW (where the inner-dimension equals the rank of M ), the column spaces of M...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
xi. The description of the complete reduction, and the proof of its soundness are involved (see [115]). The trouble is that gadgets like those in Figure ?? are unstable. We can change the number of solutions by small perturbations to the problem. Motivated by issues of uniqueness and robustness, Donoho and Stodden ...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
401k could be an anchor word for the topic personal finance. Why do anchor words help? It is easy to see that if A is separable, then the rows of W appear as rows of M (after scaling). Hence we just need to determine which rows of M correspond to anchor words. We know from our discussion in Section 2.3 that (if we ...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
start. Hence the anchor words that are deleted are redundant and we could just as well do without them. Separable NMF [13]. Input: matrix M ∈ Rⁿˣᵐ satisfying the conditions in Theorem 2.3.2. Output: A, W. Run Find Anchors on M, let W be the output. Solve for nonnegative A that minimizes ‖M − AW‖_F (convex programmin...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
will consider a related problem called topic modeling; see [28] for a comprehensive introduction. This problem is intimately related to nonnegative matrix factorization, with two crucial differences. Again there is some factorization M = AW, but now we do not get access to M but rather to M̃, which is a very crude appro...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
be sparse while the latter is dense. Are there provable algorithms for topic modeling? The Gram Matrix. We will follow an approach of Arora, Ge and Moitra [14]. At first this seems like a fundamentally different problem than the ones we have considered, because in this model we cannot ask for longer documents, w...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
] as: Σ_{i,j=1}^{r} P[w₁ = a, w₂ = b | t₁ = i, t₂ = j] P[t₁ = i, t₂ = j], and the lemma is now immediate. ∎ The key observation is that G has a separable nonnegative matrix factorization given by A and RAᵀ, since A is separable and the latter matrix is nonnegative. Indeed if RAᵀ has full row rank then the algorithm in Theore...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
P(w₁ = w′ | w₂ = w) = Σ_t P(word₁ = w′ | word₂ = anchor(t)) P(t₂ = t | w₂ = w), which we can think of as a linear system in the variables {P(t₂ = t | w₂ = w)}. It is not hard to see that if R has full rank then it has a unique solution. Finally, we compute the probabilities we were originally interested in by Bayes' rule: P(word w | topi...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
Run Find Anchors. Solve for P(topic t | word w) and use Bayes' rule to compute A. End. Experiments. We are faced with a basic scientific question now: Are there really anchor words? The following experiment was conducted in [12]: (a) Run MALLET (a popular topic modeling toolkit) on a collection...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
regard T as a collection of p matrices of size m × n that are stacked on top of each other. We can generalize many of the standard definitions from linear algebra to the tensor setting, however we caution the reader that while these parameters are easy to compute for matrices, most parameters of a tensor are hard to...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
unit vectors) u₁, u₂ ∈ R¹⁰⁰⁰ such that M ≈ u₁v₁ᵀ + u₂v₂ᵀ. This is called factor analysis, and his results somewhat confirmed his hypothesis. But there is a fundamental obstacle to this type of approach that is often referred to as the “Rotation Problem”. Set U = [u₁, u₂] and V = [v₁, v₂] and let O be an orthogonal ...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
the same M). However if we are given a tensor T = Σ_{i=1}^{r} x_i ⊗ y_i ⊗ w_i, then there are general conditions (namely if {x_i}_i, {y_i}_i and {w_i}_i are each linearly independent) under which not only is the true factorization the unique factorization of T with rank r, but in fact there are simple algorithms to find it! This is preci...
https://ocw.mit.edu/courses/18-409-algorithmic-aspects-of-machine-learning-spring-2015/2af5365a3f0d24cc2ee9f787bbab14e9_MIT18_409S15_bookex.pdf
imations do not necessarily share any common rank one factors. In fact, subtracting the best rank one approximation to a tensor T from it can actually increase its rank. (c) For a real-valued matrix its rank over R and over C are the same, but this is false for tensors. There are real-valued tensors whose minimum ...
imation of T. One last issue is that it is easy to see that a random n × n × n tensor will have rank Ω(n²), but it is unknown how to explicitly construct any order-three tensor whose rank is Ω(n^{1+ε}). And any such construction would give the first super-linear ...
so many times) to phylogenetic reconstruction [96], topic modeling [8] and community detection [9]. This decomposition also plays a crucial role in learning mixtures of spherical Gaussians [75] and independent component analysis [36], although we will instead present a local search algorithm for the latter problem...
of Ta(Tb)^+ and Tb(Ta)^+ are U and V respectively (after rescaling).

Proof: We can use the above formula for Ta and Tb and compute

Ta(Tb)^+ = U D_a D_b^+ U^+

Then almost surely over the choice of a and b we have that the diagonals of D_a D_b^+ will be distinct – this is where we use the condition that each pair of vectors ...
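The simultaneous-diagonalization step above (Jennrich's algorithm) can be sketched in a few lines of numpy. This is an illustrative implementation under the stated independence conditions, with randomly drawn factors; names and dimensions are ours.

```python
import numpy as np

# Sketch of Jennrich's algorithm: eigenvectors of Ta (Tb)^+ recover the
# factor U, since Ta (Tb)^+ = U D_a D_b^{-1} U^+ almost surely has distinct
# nonzero eigenvalues.
np.random.seed(3)
n, r = 8, 4
U, V, W = (np.random.randn(n, r) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', U, V, W)     # T = sum_i u_i ⊗ v_i ⊗ w_i

# Two random linear combinations of the slices: Ta = U diag(W^T a) V^T.
a, b = np.random.randn(n), np.random.randn(n)
Ta = np.einsum('ijk,k->ij', T, a)
Tb = np.einsum('ijk,k->ij', T, b)

eigvals, eigvecs = np.linalg.eig(Ta @ np.linalg.pinv(Tb))
# Keep the r eigenvectors with the largest |eigenvalue| (the rest are ~0).
recovered = np.real(eigvecs[:, np.argsort(-np.abs(eigvals))[:r]])

# Each recovered column should be parallel to some true column of U.
for i in range(r):
    cos = np.abs(U.T @ recovered[:, i]) / (
        np.linalg.norm(U, axis=0) * np.linalg.norm(recovered[:, i]))
    assert cos.max() > 0.99
```

The same computation with the roles of the first two modes swapped recovers V, matching the statement being proved.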
That is, the Khatri-Rao product U ⊙ V of U and V, of size m × r and n × r respectively, is an mn × r matrix whose ith column is the tensor product u_i ⊗ v_i of the ith column of U and the ith column of V. The following lemma we leave as an exercise to the reader: Lemma 3.1.7 If U and V are size m × r and n × r and have full column rank a...
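A minimal sketch of the Khatri-Rao product as defined above (the helper name is ours); with random, hence generic, factors the full-column-rank conclusion of the lemma can be checked numerically:

```python
import numpy as np

# Khatri-Rao product: for U (m x r) and V (n x r), U ⊙ V is the mn x r matrix
# whose ith column is the Kronecker product of the ith columns of U and V.
def khatri_rao(U, V):
    m, r = U.shape
    n, r2 = V.shape
    assert r == r2, "U and V must have the same number of columns"
    return np.stack([np.kron(U[:, i], V[:, i]) for i in range(r)], axis=1)

np.random.seed(4)
U = np.random.randn(5, 3)
V = np.random.randn(4, 3)
KR = khatri_rao(U, V)
assert KR.shape == (20, 3)
# Generic factors with full column rank give a Khatri-Rao product of full
# column rank, as the lemma asserts.
assert np.linalg.matrix_rank(KR) == 3
```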
close to the true factors. In later sections, we will simply assume we are given the true tensor T, and what we present here is what justifies this simplification. This section is somewhat technical, and the reader can feel free to skip it. Recall that the main step in Theorem 3.1.3 is to compute an eigendecomposition...
x? We have x̂ = M^{-1} b̂ = x + M^{-1} e = x + M^{-1}(b̂ − b). So

‖x − x̂‖ ≤ (1/σ_min(M)) ‖b − b̂‖.

Since M x = b, we also have ‖b‖ ≤ σ_max(M) ‖x‖. It follows that

‖x − x̂‖/‖x‖ ≤ (σ_max(M)/σ_min(M)) · ‖b − b̂‖/‖b‖ = κ(M) ‖b − b̂‖/‖b‖.

In other words, the condition number controls the relative error when solving a line...
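The bound above can be checked numerically; a small sketch with an arbitrary well-conditioned random system:

```python
import numpy as np

# Check that the relative error when solving M x = b with a perturbed
# right-hand side b + e is controlled by the condition number κ(M).
np.random.seed(5)
n = 6
M = np.random.randn(n, n)
x = np.random.randn(n)
b = M @ x

e = 1e-8 * np.random.randn(n)       # perturbation of the right-hand side
x_hat = np.linalg.solve(M, b + e)

kappa = np.linalg.cond(M)           # σ_max(M) / σ_min(M)
lhs = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
rhs = kappa * np.linalg.norm(e) / np.linalg.norm(b)
assert lhs <= rhs * (1 + 1e-6)      # the bound from the text holds
```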
the previous section: is M̂ diagonalizable? Consider

U^{-1} M̂ U = D + U^{-1} E U.

The proof that M̂ is diagonalizable proceeds as follows:

Part (a) Since M̂ and U^{-1} M̂ U are similar matrices, they have the same set of eigenvalues.

Part (b) Moreover we can apply Theorem 3.2.2 to U^{-1} M̂ U = D + U^{-1} E U, and if U is ...
eigenvalues of M have sufficient separation. It remains to check that û_i ≈ u_i. Let

û_i = Σ_{j=1}^r c_j u_j.

Recall that M̂ = M + E. Left-multiplying both sides of the equation above by M̂, we get

Σ_j c_j λ_j u_j + E û_i = λ̂_i û_i  =⇒  Σ_j c_j (λ_j − λ̂_i) u_j = −E û_i.

Let w_j^T be the jth row of U^{-1}. Left-multiplying both side...
and T represents its true moments. Also T̂_a = T̂(∗, ∗, a) → T_a, and similarly for b. We leave it as an exercise to the reader to check that T̂_a T̂_b^+ → T_a T_b^+ under natural conditions. We have already established that if E → 0, then the eigendecompositions of M and M + E converge. Finally we concl...
Open Question 1 Is there an efficient algorithm for tensor decompositions under any natural conditions, for r = (1 + ε)n for any ε > 0? For example, it is natural to consider a smoothed analysis model for tensor decomposition [26] where the factors of T are perturbed and hence not adversarially chosen. The above uni...
u. Alternatively, we can think of s(·) : V → Σ as a random function that assigns states to vertices, where the marginal distribution of s(r) is πr and

P^{uv}_{ij} = P(s(v) = j | s(u) = i).

Note that s(v) is independent of s(t) conditioned on s(u) whenever the (unique) shortest path from v to t in the tree passes through u...
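This tree model can be sketched as a sampling procedure; the tree, the state space, and the shared transition matrix below are illustrative assumptions, not from the text:

```python
import random

# Sample the tree hidden Markov model: states propagate from the root down
# each edge (u, v) according to the transition matrix P^{uv}.
k = 2                                   # |Σ|, number of states
pi_r = [0.5, 0.5]                       # marginal distribution at the root
P = [[0.9, 0.1], [0.1, 0.9]]            # one shared transition matrix P^{uv}
children = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}  # a small rooted tree

def sample_states(root=0, seed=6):
    rng = random.Random(seed)
    s = {root: rng.choices(range(k), weights=pi_r)[0]}
    stack = [root]
    while stack:
        u = stack.pop()
        for v in children[u]:
            # s(v) depends only on s(u): the Markov property on the tree.
            s[v] = rng.choices(range(k), weights=P[s[u]])[0]
            stack.append(v)
    return s

s = sample_states()
assert set(s) == {0, 1, 2, 3, 4}
```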
, Steel, Szekely, and Warnow [57]. And from this, we can apply tensor methods to find the transition matrices following the approach of Chang [36] and later Mossel and Roch [96]. Finding the Topology The basic idea here is to define an appropriate distance function [109] on the edges of the tree, so that we can appr...
the topology. Fix four leaves a, b, c, and d; there are exactly three possible induced topologies between these leaves, given in Figure 3.1. (Here by induced topology, we mean delete edges not on any shortest path between any pair of the four leaves, and contract paths to a single edge if possible.) Our goal i...
Handling Noise Note that we can only approximate F^{ab} from our samples. This translates into a good approximation of ψ_{ab} when a and b are close, but is noisy when a and b are far away. The approach of Erdos, Steel, Szekely, and Warnow [57] is to only use quartets where all of the distances are short. Finding th...
also called Chang’s lemma [36]). In [96], Mossel and Roch use this approach to find the transition matrices of a phylogenetic tree, given the tree topology, as follows. Let us assume that u and v are internal nodes and that w is a leaf. Furthermore suppose that v lies on the shortest path between u and w. The basic...
the above algorithm, we need that the transition matrices and the observation matrices are full rank. More precisely, we require that the transition matrices are invertible and that the observation matrices, whose rows correspond to the states of a hidden node and whose columns correspond to the output symbols, each have...
value bj = χS(Xj) with probability 2/3 and otherwise bj = 1 − χS(Xj). The challenge is that we do not know which labels have been flipped.

Claim 3.3.2 There is a brute-force algorithm that solves the noisy parity problem using O(n log n) samples.

Proof: For each T, calculate χT(Xj)bj over the samples. Indeed χT ...
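The brute-force algorithm in the claim can be sketched directly: enumerate every candidate set T and keep the one whose parity χ_T agrees with the noisy labels on roughly a 2/3 fraction of samples. The parameters below are small and illustrative.

```python
import itertools, random

# Parity of x over the coordinates in T.
def chi(T, x):
    return sum(x[i] for i in T) % 2

# Brute force over all 2^n candidate sets, keeping the best-agreeing one.
def solve_noisy_parity(samples):
    n = len(samples[0][0])
    best, best_agree = None, -1.0
    for size in range(n + 1):
        for T in itertools.combinations(range(n), size):
            agree = sum(chi(T, x) == b for x, b in samples) / len(samples)
            if agree > best_agree:
                best, best_agree = set(T), agree
    return best

# Generate samples: labels are χ_S(X) flipped with probability 1/3.
rng = random.Random(7)
S = {0, 3}
samples = []
for _ in range(300):
    x = [rng.randint(0, 1) for _ in range(5)]
    b = chi(S, x) if rng.random() < 2 / 3 else 1 - chi(S, x)
    samples.append((x, b))
assert solve_noisy_parity(samples) == S
```

For the true set the expected agreement is 2/3, while every other set agrees on about half the samples, which is what makes the enumeration work with O(n log n) samples.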
the state of the ith internal node, where si = χ_{Si}(X). We can define the following transition matrices:

if i + 1 ∈ S:   P^{i+1,i} = 1/2 on (0, si),  1/2 on (1, si + 1 mod 2),  0 otherwise

if i + 1 ∉ S:   P^{i+1,i} = 1/2 on (0, si),  1/2 on (1, si),  0 otherwise

At each internal node we observe xi and at th...
each of which would lead to a different optimization problem, e.g. sparsest cut or k-densest subgraph. However each of these optimization problems is NP-hard and, even worse, hard to approximate. Instead, we will formulate our problem in an average-case model where there is an underlying community structure that ...
graph G with high probability. If q − p is smaller, then it is not even information-theoretically possible to find π. Indeed, we should also require that each part of the partition is large, and for simplicity we will assume that k = O(1) and |{u | π(u) = i}| = Ω(n). There has been a long line of work on partitioning ...
according to πu and πv respectively, and if they choose the same community there is an edge with probability q and otherwise there is an edge with probability p:

P[(u, v) ∈ E] = Σ_i π_u^i π_v^i q + Σ_{i≠j} π_u^i π_v^j p.

Recall that in order to apply tensor decomposition methods what we really need are conditionally independent...
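The edge probability above is straightforward to compute; a small sketch with made-up membership vectors:

```python
import numpy as np

# Edge probability in the mixed-membership model: u and v pick communities
# from pi_u, pi_v; same community => edge w.p. q, different => edge w.p. p.
def edge_probability(pi_u, pi_v, p, q):
    same = float(np.dot(pi_u, pi_v))   # P[u and v choose the same community]
    return same * q + (1.0 - same) * p

pi_u = np.array([0.7, 0.3])
pi_v = np.array([0.2, 0.8])
prob = edge_probability(pi_u, pi_v, p=0.1, q=0.5)
# same = 0.7*0.2 + 0.3*0.8 = 0.38; prob = 0.38*0.5 + 0.62*0.1 = 0.252
assert abs(prob - 0.252) < 1e-12
```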
, b ∈ B and c ∈ C; then T_{a,b,c} is exactly the probability that a random node x ∈ X is connected to a, b and c. This is immediate from the definitions above. In particular if we look at whether (x, a), (x, b) and (x, c) are edges in G, these are conditionally in...
in A, B and C, and hence the factors {(ΠR)_A}_i, {(ΠR)_B}_i, and {(ΠR)_C}_i will be non-negligibly far from linearly dependent.

Part (c) Note that if we have a good approximation to {(ΠR)_A}_i then we can partition A into communities. In turn, if A is large enough then we can extend this partitioning to the whole g...
pure topic models (see [10]). Recall that there is an unknown topic matrix A and we obtain samples from the following model:

(a) Choose topic i for document j with probability pi

(b) Choose Nj words according to the distribution Ai

If each document has at least three words, we can define the tensor T̂ where T̂_{a,b,c}...
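The empirical version of this tensor can be sketched as follows; the sampling setup below is an illustrative assumption consistent with steps (a) and (b), with made-up dimensions.

```python
import numpy as np

# Empirical tensor for pure topic models: T̂[a, b, c] is the fraction of
# documents whose first three words are a, b, c; it estimates the population
# tensor sum_i p_i A_i ⊗ A_i ⊗ A_i.
np.random.seed(8)
n_words, r, n_docs = 10, 3, 5000
A = np.random.dirichlet(np.ones(n_words), size=r).T   # columns A[:, i] = topic i
p = np.random.dirichlet(np.ones(r))                   # topic probabilities

T_hat = np.zeros((n_words, n_words, n_words))
for _ in range(n_docs):
    i = np.random.choice(r, p=p)                            # (a) choose a topic
    a, b, c = np.random.choice(n_words, size=3, p=A[:, i])  # (b) first 3 words
    T_hat[a, b, c] += 1.0 / n_docs

# Population tensor for comparison.
T = np.einsum('i,ai,bi,ci->abc', p, A, A, A)
assert np.abs(T_hat - T).max() < 0.05
```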
Finally, in our application to phylogenetic reconstruction, each hidden node was in one and only one state. Note however that in the context of topic models, it is much more realistic to assume that each document is itself a mixture of topics and we will refer to these as mixed models. Latent Dirichlet Allocation ...
T = Σ_{i,j,k} D_{i,j,k} a_i ⊗ b_j ⊗ c_k

where D is r1 × r2 × r3. We call D the core tensor. This is different from the standard definition for a tensor decomposition, where we only summed over i = j = k. The good news is that computing a Tucker decomposition of a tensor is easy. Indeed we can always set...
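Computing a Tucker decomposition really is easy; here is a sketch via SVDs of the mode unfoldings, which is one standard way to choose the bases (not necessarily the text's choice):

```python
import numpy as np

# Tucker decomposition sketch: take orthonormal bases U, V, W for each mode
# (via SVD of the unfoldings) and set the core D = T ×1 U^T ×2 V^T ×3 W^T.
np.random.seed(9)
m, n, p = 4, 5, 6
T = np.random.randn(m, n, p)

def mode_basis(T, mode):
    # Unfold T along `mode` and return an orthonormal column basis.
    M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    U, _, _ = np.linalg.svd(M, full_matrices=False)
    return U

U, V, W = (mode_basis(T, k) for k in range(3))
D = np.einsum('abc,ai,bj,ck->ijk', T, U, V, W)     # core tensor

# Reconstruction: T = sum_{i,j,k} D[i,j,k] u_i ⊗ v_j ⊗ w_k.
T_rec = np.einsum('ijk,ai,bj,ck->abc', D, U, V, W)
assert np.allclose(T, T_rec)
```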
­ ability that the first three words in a random document are a, b and c respectively. But we could just as well consider alternative experiments. The three experiments we will need in order to give a tensor spectral algorithm for LDA are: (a) Choose three documents at random, and look at the first word of each docu...
= Σ_{i,j,k} D_{i,j,k} A_i ⊗ A_j ⊗ A_k

Proof: Let w1 denote the first word and let t1 denote the topic of w1 (and similarly for the other words). We can expand P[w1 = a, w2 = b, w3 = c] as:

Σ_{i,j,k=1}^r P[w1 = a, w2 = b, w3 = c | t1 = i, t2 = j, t3 = k] P[t1 = i, t2 = j, t3 = k]

and the lemma is now immediate. • ...