to show that the cardinality of this set is the same as the cardinality of the set that interests us). In this lecture, this set might be uncountable. Therefore, we need to introduce a metric on this set so that we can treat close points in the same manner. To this end we will define covering numbers (which basicall...
https://ocw.mit.edu/courses/18-657-mathematics-of-machine-learning-fall-2015/5ebb42429b252cbd2f711cd03e01b97f_MIT18_657F15_L6.pdf
the empirical $\ell_1$ distance as
$$d_1^x(f,g) = \frac{1}{n}\sum_{i=1}^{n} |f(x_i) - g(x_i)|.$$
Theorem: If $0 \le f \le 1$ for all $f \in \mathcal{F}$, then for any $x = (x_1, \dots, x_n)$, we have
$$\hat{R}_n^x(\mathcal{F}) \le \inf_{\varepsilon \ge 0} \left\{ \varepsilon + \sqrt{\frac{2 \log\left(2 N(\mathcal{F}, d_1^x, \varepsilon)\right)}{n}} \right\}.$$
Proof. Fix $x = (x_1, \dots, x_n)$ and $\varepsilon > 0$. Let $V$ be a minimal $\varepsilon$-net of...
https://ocw.mit.edu/courses/18-657-mathematics-of-machine-learning-fall-2015/5ebb42429b252cbd2f711cd03e01b97f_MIT18_657F15_L6.pdf
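The ε-net idea in the theorem can be mimicked numerically. Below is a minimal sketch (not from the notes) that upper-bounds a covering number N(F, d_1^x, ε) with a greedy net; the threshold-function class and the grid sizes are illustrative assumptions for the demo.

```python
import numpy as np

def empirical_l1(f_vals, g_vals):
    # d_1^x(f, g) = (1/n) * sum_i |f(x_i) - g(x_i)|
    return np.mean(np.abs(f_vals - g_vals))

def greedy_net_size(F, eps):
    """Greedy upper bound on the covering number N(F, d_1^x, eps).
    F has shape (num_functions, n): each row is (f(x_1), ..., f(x_n))."""
    net = []
    for f in F:
        if not any(empirical_l1(f, g) <= eps for g in net):
            net.append(f)
    return len(net)

# Illustrative class: threshold functions f_t(x) = 1{x <= t} on a grid of n points.
n = 100
x = np.linspace(0, 1, n)
F = np.array([(x <= t).astype(float) for t in np.linspace(0, 1, 200)])

# Finer scales need larger nets -- the trade-off behind the theorem's infimum.
assert greedy_net_size(F, 0.5) <= greedy_net_size(F, 0.1) <= greedy_net_size(F, 0.01)
assert greedy_net_size(F, 2.0) == 1  # all values lie in [0, 1], so d_1 <= 1 <= 2
```

The greedy net is only an upper bound on the minimal net, but it already exhibits the growth of N as ε shrinks.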
$$\inf_{\varepsilon \ge 0} \left\{ \varepsilon + \sqrt{\frac{2 \log\left(2 N(\mathcal{F}, d_1^x, \varepsilon)\right)}{n}} \right\}.$$
The previous bound clearly establishes a trade-off, because as $\varepsilon$ decreases, $N(\mathcal{F}, d_1^x, \varepsilon)$ increases.

5.2.2 Computing Covering Numbers

As a warm-up, we will compute the covering number of the $\ell_2$ ball of radius 1 in $\mathbb{R}^d$. There are several techniqu...
https://ocw.mit.edu/courses/18-657-mathematics-of-machine-learning-fall-2015/5ebb42429b252cbd2f711cd03e01b97f_MIT18_657F15_L6.pdf
$$|V| \le \left(\frac{1 + \frac{\varepsilon}{2}}{\frac{\varepsilon}{2}}\right)^d = \left(\frac{2}{\varepsilon} + 1\right)^d \le \left(\frac{3}{\varepsilon}\right)^d.$$
For any $p \ge 1$, define
$$d_p^x(f,g) = \left(\frac{1}{n}\sum_{i=1}^{n} |f(x_i) - g(x_i)|^p\right)^{1/p},$$
and for $p = \infty$, define
$$d_\infty^x(f,g) = \max_i |f(x_i) - g(x_i)|.$$
Using the previous theorem, in order to bound...
https://ocw.mit.edu/courses/18-657-mathematics-of-machine-learning-fall-2015/5ebb42429b252cbd2f711cd03e01b97f_MIT18_657F15_L6.pdf
$B(f, d_\infty^x, \varepsilon) \subseteq B(f, d_p^x, \varepsilon)$, so $N(\mathcal{F}, d_p^x, \varepsilon) \le N(\mathcal{F}, d_\infty^x, \varepsilon)$. Now suppose that $1 \le p \le q \le \infty$. Using Hölder's inequality with $r = \frac{q}{p} \ge 1$ we obtain
$$\left(\frac{1}{n}\sum_{i=1}^{n} |z_i|^p\right)^{1/p} \le \left(\frac{1}{n}\, n^{1-\frac{1}{r}} \left(\sum_{i=1}^{n} |z_i|^{pr}\right)^{\frac{1}{r}}\right)^{1/p} = \left(\frac{1}{n}\sum_{i=1}^{n} |z_i|^q\right)^{1/q}.$$
This inequality yields $B(f, d_q^x, \varepsilon) = \{g : d_q^x(f,g) \le \varepsilon\} \subseteq B(f, d_p^x, \varepsilon)$, whic...
https://ocw.mit.edu/courses/18-657-mathematics-of-machine-learning-fall-2015/5ebb42429b252cbd2f711cd03e01b97f_MIT18_657F15_L6.pdf
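The monotonicity d_p ≤ d_q for p ≤ q that the Hölder argument establishes can be spot-checked numerically; the random function values below are an illustrative assumption.

```python
import numpy as np

# Empirical distance d_p^x(f, g) = ((1/n) * sum_i |f(x_i) - g(x_i)|^p)^(1/p).
def d_p(f_vals, g_vals, p):
    if np.isinf(p):
        return np.max(np.abs(f_vals - g_vals))
    return np.mean(np.abs(f_vals - g_vals) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
f, g = rng.random(50), rng.random(50)

# p <= q implies d_p <= d_q, hence N(F, d_p, eps) <= N(F, d_q, eps).
dists = [d_p(f, g, p) for p in (1, 2, 4, np.inf)]
assert all(a <= b + 1e-12 for a, b in zip(dists, dists[1:]))
```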
MIT OpenCourseWare http://ocw.mit.edu 6.641 Electromagnetic Fields, Forces, and Motion, Spring 2005 Please use the following citation format: Markus Zahn, 6.641 Electromagnetic Fields, Forces, and Motion, Spring 2005. (Massachusetts Institute of Technology: MIT OpenCourseWare). http://ocw.mit.edu (accessed MM DD, ...
https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/5ee93bfd5358d2f6a8da82ad8874c029_lecture2.pdf
$$\Phi \approx \Delta x\, \Delta y\, \Delta z \left\{ \frac{A_x(x,y,z) - A_x(x-\Delta x,y,z)}{\Delta x} + \frac{A_y(x,y+\Delta y,z) - A_y(x,y,z)}{\Delta y} + \frac{A_z(x,y,z+\Delta z) - A_z(x,y,z)}{\Delta z} \right\}$$
$$\approx \Delta V \left[ \frac{\partial A_x}{\partial x} + \frac{\partial A_y}{\partial y} + \frac{\partial A_z}{\partial z} \right]$$
$$\operatorname{div} A = \lim_{\Delta V \to 0} \frac{\oint_S A \cdot dS}{\Delta V}...
https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/5ee93bfd5358d2f6a8da82ad8874c029_lecture2.pdf
$$\oint_S A \cdot dS = \sum_{i=1}^{N} \oint_{S_i} A \cdot dS_i = \lim_{\substack{N \to \infty \\ \Delta V_i \to 0}} \sum_{i=1}^{N} (\nabla \cdot A)\, \Delta V_i = \int_V \nabla \cdot A\, dV$$
$$\int_V \nabla \cdot A\, dV = \oint_S A \cdot da$$

3. Gauss' Law in Differential Form
$$\oint_S \varepsilon_0 E \cdot da = \int_V \nabla \cdot (\varepsilon_0 E)\, dV = \int_V \rho\, dV \quad\Rightarrow\quad \nabla \cdot (\varepsilon_0 E) = \rho$$
$$\oint_S \mu_0 H \cdot da = \int_V \nabla \cdot (\mu_0 H)\, dV = 0...
https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/5ee93bfd5358d2f6a8da82ad8874c029_lecture2.pdf
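The divergence theorem above can be sanity-checked numerically. This sketch uses the illustrative field A = (xy, yz, zx) on the unit cube: ∇·A = x + y + z integrates to 3/2, matching the outward flux through the three faces where A does not vanish (the field choice and grid size are assumptions for the demo; midpoint quadrature is exact for these linear integrands).

```python
import numpy as np

# Midpoint grid on [0, 1]
N = 100
pts = (np.arange(N) + 0.5) / N

# Volume integral of div A = x + y + z over the unit cube (mean = integral).
X, Y, Z = np.meshgrid(pts, pts, pts, indexing="ij")
volume_integral = (X + Y + Z).mean()

# Outward flux: A = (x*y, y*z, z*x) vanishes on the x=0, y=0, z=0 faces.
U, V = np.meshgrid(pts, pts, indexing="ij")
flux_x1 = U.mean()   # face x=1: integrate A_x(1, y, z) = y  (U plays y)
flux_y1 = V.mean()   # face y=1: integrate A_y(x, 1, z) = z  (V plays z)
flux_z1 = U.mean()   # face z=1: integrate A_z(x, y, 1) = x  (U plays x)
flux = flux_x1 + flux_y1 + flux_z1

assert abs(volume_integral - 1.5) < 1e-9
assert abs(flux - volume_integral) < 1e-9   # Gauss' theorem: flux = volume integral
```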
$$\cdots + i_z \left[\frac{\partial A_y}{\partial x} - \frac{\partial A_x}{\partial y}\right] = \det \begin{bmatrix} i_x & i_y & i_z \\ \dfrac{\partial}{\partial x} & \dfrac{\partial}{\partial y} & \dfrac{\partial}{\partial z} \\ A_x & A_y & A_z \end{bmatrix} = \nabla \times A$$

2. Stokes' Integral Theorem
Courtesy of Krieger Publishing. Use...
https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/5ee93bfd5358d2f6a8da82ad8874c029_lecture2.pdf
$$\oint_S (\nabla \times H) \cdot da, \qquad \nabla \times H = J + \varepsilon_0 \frac{\partial E}{\partial t}$$

III. Applications to Maxwell's Equations

1. Vector Identity
$$\lim_{C \to 0} \oint_C A \cdot ds = 0 = \oint_S (\nabla \times A) \cdot da = \int_V \nabla \cdot (\nabla \times A)\, dV \quad\Rightarrow\quad \nabla \cdot (\nabla \times A) = 0$$

2. Charge Conservation
$$\nabla \cdot \left\{ \nabla \times H = J + \varepsilon_0 \frac{\partial E}{\partial t} \right\} \quad\Rightarrow\quad 0 = \nabla \cdot \left[ J + \varepsilon_0 \frac{\partial E}{\partial t} \right]...
https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/5ee93bfd5358d2f6a8da82ad8874c029_lecture2.pdf
$$\nabla \cdot \left[ J + \varepsilon_0 \frac{\partial E}{\partial t} \right] = 0$$

MQS Limit
$$\nabla \times E = -\mu_0 \frac{\partial H}{\partial t}, \qquad \nabla \times H = J$$
$$\nabla \cdot E = -\nabla \cdot \nabla \Phi = -\nabla^2 \Phi = \frac{\rho}{\varepsilon_0} \quad \text{(Poisson's Eq.)}$$
$$\Phi(x,y,z) = \iiint_{x',y',z'} \frac{\rho(x',y',z')\, dx'\, dy'\, dz'}{4\pi\varepsilon_0 \left[(x-x')^2 + (y-y')^2 + (z-z')^2\right]^{1/2}}$$
$$\nabla \cdot (\mu_0 H) = 0 \;\Rightarrow\; \mu_0 H = \nabla \times A, \qquad \nabla^2 A = -\mu_0...
https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/5ee93bfd5358d2f6a8da82ad8874c029_lecture2.pdf
3.46 PHOTONIC MATERIALS AND DEVICES
Lecture 1: Optical Materials Design Part 1
Lecture Notes

Goal: To develop principles for optical materials design.
Approach: Physical basis of properties; use properties in design.

Electromagnetic Field
Apply voltage: $E = E(r, t)$
Apply current: $H = H(r, t)$
Maxwe...
https://ocw.mit.edu/courses/3-46-photonic-materials-and-devices-spring-2006/5efbe006f817e26278e1d29903eed191_3_46lec1_optmat1.pdf
Permittivity of medium $\varepsilon$; $\varepsilon/\varepsilon_0$ = dielectric constant (static):

Material | $\varepsilon/\varepsilon_0$
Si | 11.7
Ge | 16
LiNbO3 | 43
BaTiO3 | 3600

Static: ν = 0

3.46 Photonic Materials and Devices, Prof. Lionel C. Kimerling, Lecture 1: Optical Materials Design Part 1
https://ocw.mit.edu/courses/3-46-photonic-materials-and-devices-spring-2006/5efbe006f817e26278e1d29903eed191_3_46lec1_optmat1.pdf
Source: Power P (mW), Spectral Bandwidth σλ (nm)
Fiber: Attenuation α (dB/km), Response Time στ/L (ns/km), Length L (km)
Detector: Sensitivity n0 (photons/bit), Data Rate B0 (bits/s)
https://ocw.mit.edu/courses/3-46-photonic-materials-and-devices-spring-2006/5efbe006f817e26278e1d29903eed191_3_46lec1_optmat1.pdf
acceleration, velocity, $x$ (position)

Linear differential equation
Resonances
Driven simple harmonic oscillators
$$\frac{d^2 P}{dt^2} = -\sigma \frac{dP}{dt} - \omega_0^2 P + \omega_0^2 \varepsilon_0 \chi_0 E$$
$$P = N e x \quad (\text{dipole moment} \times \#\text{charges/unit volume}), \qquad P = \varepsilon_0 \chi(\nu) E$$
$$P = \varepsilon_0 \left[ \frac{\chi_0\, \omega_0^2}{(\omega_0^2 - \omega^2) - j\sigma\omega} \right] E...
https://ocw.mit.edu/courses/3-46-photonic-materials-and-devices-spring-2006/5efbe006f817e26278e1d29903eed191_3_46lec1_optmat1.pdf
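The steady-state solution of the driven oscillator above gives the susceptibility χ(ω) = χ₀ω₀² / (ω₀² − ω² − jσω); a quick numerical sketch (the values of χ₀, ω₀, and σ below are illustrative, not from the notes):

```python
import numpy as np

# chi(omega) = chi_0 * omega_0^2 / (omega_0^2 - omega^2 - 1j*sigma*omega),
# matching the Lorentz-oscillator expression for P = eps0 * chi(omega) * E.
def susceptibility(omega, chi0=1.0, omega0=2 * np.pi * 1e14, sigma=1e13):
    return chi0 * omega0 ** 2 / (omega0 ** 2 - omega ** 2 - 1j * sigma * omega)

# Static limit (nu = 0): chi(0) = chi_0, i.e. P = eps0 * chi_0 * E.
assert np.isclose(susceptibility(0.0), 1.0)

# Near resonance (omega = omega_0) the magnitude is strongly enhanced,
# limited only by the damping: |chi| = omega_0 / sigma there.
assert abs(susceptibility(2 * np.pi * 1e14)) > 5
```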
Ten Lectures and Forty-Two Open Problems in the Mathematics of Data Science
Afonso S. Bandeira
December, 2015

Preface
These are notes from a course I gave at MIT in the Fall of 2015 entitled "18.S096: Topics in Mathematics of Data Science". These notes are not in final form and will be continuously edited and/or correc...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
0.2.2 Matrix AM-GM inequality
0.3 Brief Review of some linear algebra tools
Singular Value Decomposition
Spectral ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
1.2.1 A related open problem
1.3 Spike Models and BBP transition
...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
2.2.3 A simple example
2.2.4 Similar non-linear dimensional reduction techniques
2.3 Semi-supervised learning
2.3.1 An interesting experience and the Sobolev Embedding Theorem
3 Spectral C...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
4 Concentration Inequalities, Scalar and Matrix Versions
4.1 Large Deviation Inequalities
4.1.1 Sums of independent random variables
4.2 Gaussian Concentratio...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
4.5 Optimality of matrix concentration result for gaussian series
4.5.1 An interesting observation regarding random matrices with independent entries
4.6.1 A...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
5.2 Gordon's Theorem
5.2.1 Gordon's Escape Through a Mesh Theorem
5.2.2 Proof of Gordon's Theorem
...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
6.5.2 Equiangular Tight Frames
6.5.3 The Paley ETF
6.6 The Kadison-Singer problem
...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
3.2 The deletion channel
8 Approximation Algorithms and Max-Cut
8.1 The Max-Cut problem
8.2 Can αGW be improved? ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
9.4 Exact recovery
9.5 The algorithm
9.6 The analysis
...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
9.6.1 Some preliminary definitions
9.13 Another conjectured instance of tightness
10 Synchronization Problems and Alignment
10.1 Synchronization-type problems
10.2 Angular Synchronization ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
[MS15].
• 2.1: Ramsey numbers
• 2.2: Erdős–Hajnal Conjecture
• 2.3: Planted Clique Problems
• 3.1: Optimality of Cheeger's inequality
• 3.2: Certifying positive-semidefiniteness
• 3.3: Multi-way Cheeger's inequality
• 4.1: Non-commutative Khintchine improvement
• 4.2: Latala–Riemer–Schutt Problem
• 4.3: Matrix Six devia...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
Positive PCA tightness
• 10.1: Angular Synchronization via Projected Power Method
• 10.2: Sharp tightness of the Angular Synchronization SDP
• 10.3: Tightness of the Multireference Alignment SDP
• 10.4: Consistency and sample complexity of Multireference Alignment

0.2 A couple of Open Problems
We start with a couple of...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
the following is true:
(a)
$$\left\| \frac{1}{n!} \sum_{\sigma \in \mathrm{Sym}(n)} \prod_{j=1}^{n} A_{\sigma(j)} \right\| \le \cdots$$
and (b)
$$\frac{1}{n!} \sum_{\sigma \in \mathrm{Sym}(n)} \left\| \prod_{j=1}^{n} A_{\sigma(j)} \right\| \le \cdots$$
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
fly review a few linear algebra tools that will be important during the course. If you need a refresher on any of these concepts, I recommend taking a look at [HJ85] and/or [Gol96].

0.3.1 Singular Value Decomposition
The Singular Value Decomposition (SVD) is one of the most useful tools for this course! Given a matrix M ∈...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
$$\|M\| = \max_k |\lambda_k(M)|.$$

0.3.3 Trace and norm

Given a matrix $M \in \mathbb{R}^{n \times n}$, its trace is given by
$$\operatorname{Tr}(M) = \sum_{k=1}^{n} M_{kk} = \sum_{k=1}^{n} \lambda_k(M).$$
Its Frobenius norm is given by
$$\|M\|_F = \sqrt{\sum_{ij} M_{ij}^2} = \sqrt{\operatorname{Tr}(M^T M)}.$$
A particularly important property of the trace is that:
$$\operatorname{Tr}(...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
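The trace and Frobenius-norm identities above are easy to confirm numerically; a quick sketch on a random symmetric matrix:

```python
import numpy as np

# Check: Tr(M) = sum of eigenvalues, and ||M||_F^2 = Tr(M^T M) = sum of M_ij^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
M = (A + A.T) / 2                       # symmetric, so eigenvalues are real

eigvals = np.linalg.eigvalsh(M)
assert np.isclose(np.trace(M), eigvals.sum())
assert np.isclose(np.linalg.norm(M, "fro") ** 2, np.trace(M.T @ M))
assert np.isclose(np.linalg.norm(M, "fro") ** 2, (M ** 2).sum())
```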
that (2) is maximized by taking $v_1, \dots, v_d$ to be the $d$ leading eigenvectors of $M$ and that its value is simply the sum of the $d$ largest eigenvalues of $M$. The nice consequence of this is that the solution to (2) can be computed sequentially: we can first solve for $d = 1$, computing $v_1$, then $v_2$, and so on. Remark 0.2 A...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
of unknown variables from pair- wise ratios on compact groups. 11. Some extra material may be added, depending on time available. 0.4 Open Problems A couple of open problems will be presented at the end of most lectures. They won’t necessarily be the most important problems in the field (although some will be rather imp...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
as
$$\mu_n = \frac{1}{n} \sum_{k=1}^{n} x_k, \qquad (4)$$
and its sample covariance as
$$\Sigma_n = \frac{1}{n-1} \sum_{k=1}^{n} (x_k - \mu_n)(x_k - \mu_n)^T. \qquad (5)$$
Remark 1.1 If $x_1, \dots, x_n$ are independently sampled from a distribution, $\mu_n$ and $\Sigma_n$ are unbiased estimators for, respectively, the mean and covariance of the distribution. We will start with the fir...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
$$\sum_{k=1}^{n} x_k - n\mu^* - V \left( \sum_{k=1}^{n} \beta_k \right) = 0.$$
Because $\sum_{k=1}^{n} \beta_k = 0$ we have that the optimal $\mu$ is given by
$$\mu^* = \frac{1}{n} \sum_{k=1}^{n} x_k = \mu_n,$$
the sample mean. We can then proceed to find the solution of (9) by solving
$$\min_{\substack{V, \beta_k \\ V^T V = I}} \sum_{k=1}^{n} \|x_k - ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
$$\begin{aligned} \left\| (x_k - \mu_n) - V V^T (x_k - \mu_n) \right\|^2 &= (x_k - \mu_n)^T (x_k - \mu_n) - 2 (x_k - \mu_n)^T V V^T (x_k - \mu_n) \\ &\quad + (x_k - \mu_n)^T V V^T V V^T (x_k - \mu_n) \\ &= (x_k - \mu_n)^T (x_k - \mu_n) - (x_k - \mu_n)^T V V^T (x_k - \mu_n). \end{aligned}$$
Since $(x_k - \mu_n)^T (x_k - \mu_n)$ does not depend on $V$, minimizing (9) is equivalent to
$$\max_{V^T V = I} \sum_{k=1}^{n} (x_k - \mu_n)^T V V^T (x_k - \mu_n).$$
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
rst show that interpretation (2), finding the d-dimensional projection of $x_1, \dots, x_n$ that preserves the most variance, also leads to the optimization problem (13).

1.1.2 PCA as the d-dimensional projection that preserves the most variance

We aim to find an orthonormal basis $v_1, \dots, v_d$ (organized as $V = [v_1, \dots...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
(13) and that the two interpretations of PCA are indeed equivalent.

1.1.3 Finding the Principal Components

When given a dataset $x_1, \dots, x_n \in \mathbb{R}^p$, in order to compute the Principal Components one needs to find the leading eigenvectors of
$$\Sigma_n = \frac{1}{n-1} \sum_{k=1}^{n} (x_k - \mu_n)(x_k - \mu_n)^T.$$
A naive way of doing this w...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
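A minimal sketch of the two standard routes to the principal components: the leading eigenvectors of Σ_n versus the top right singular vectors of the centered data matrix. The random correlated data is an illustrative assumption; both routes span the same d-dimensional principal subspace.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, d = 500, 10, 3
X = rng.standard_normal((n, p)) @ rng.standard_normal((p, p))  # correlated data

mu = X.mean(axis=0)
Xc = X - mu
Sigma_n = Xc.T @ Xc / (n - 1)          # sample covariance (5)

eigvals, eigvecs = np.linalg.eigh(Sigma_n)
V_eig = eigvecs[:, ::-1][:, :d]        # d leading eigenvectors (eigh is ascending)

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
V_svd = Vt[:d].T                       # same subspace via the SVD of the data

# The two principal subspaces coincide (compare orthogonal projectors,
# which are invariant to sign flips / rotations within the subspace).
P1, P2 = V_eig @ V_eig.T, V_svd @ V_svd.T
assert np.allclose(P1, P2, atol=1e-6)
```

Working with the SVD of the (centered) data avoids forming Σ_n explicitly, which is the usual recommendation for large p.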
time are randomized algorithms that compute an approximate solution (see for example [HMT09, RST09, MM15]).

1.1.4 Which d should we pick?

Given a dataset, if the objective is to visualize it, then picking d = 2 or d = 3 might make the most sense. However, PCA is useful for many other purposes, for example: (1...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
any orthonormal basis V = [v1, . . . , vp] of Rp, consider the following random variable ΓV : Given a draw of the random vector g, ΓV is the squared (cid:96)2 norm of the largest projection of g on a subspace generated by d elements of the basis V . The question is: What is the basis V for which E [ΓV ] is maximized? 2...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
$$\mathbb{E}\left[ \max_{\substack{S \subset [p] \\ |S| = d}} \sum_{i \in S} \left( v_i^T g \right)^2 \right],$$
where $g \sim \mathcal{N}(0, \Sigma)$. The observation regarding the different ordering of the steps amounts to saying that the eigenbasis of $\Sigma$ is the optimal solution for
$$\operatorname*{argmax}_{\substack{V \in \mathbb{R}^{p \times p} \\ V^T V = I}} \; \mathbb{E}\left[ \max_{\substack{S \subset [p] \\ |S| = d}} \sum_{i \in S} \left( v_i^T g \right)^2 \right].$$

1.2 PCA in hi...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
For simplicity we will instead try to understand the spectral properties of
$$S_n = \frac{1}{n} X X^T.$$
Since $x \sim \mathcal{N}(0, \Sigma)$ we know that $\mu_n \to 0$ (and, clearly, $\frac{n}{n-1} \to 1$), so the spectral properties of $S_n$ will be essentially the same as those of $\Sigma_n$.³
Let us start by looking into a simple example, $\Sigma = I$. In that case, the distribution has no low ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
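The Σ = I example can be visualized numerically: the eigenvalues of S_n spread over the Marchenko–Pastur bulk [(1−√γ)², (1+√γ)²] with γ = p/n rather than concentrating at 1. A sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 4000, 1000
gamma = p / n                       # aspect ratio gamma = p/n = 0.25
X = rng.standard_normal((p, n))     # columns x_k ~ N(0, I)
S_n = X @ X.T / n
eigs = np.linalg.eigvalsh(S_n)

edge_low = (1 - np.sqrt(gamma)) ** 2    # 0.25
edge_high = (1 + np.sqrt(gamma)) ** 2   # 2.25
assert abs(eigs.max() - edge_high) < 0.1   # top eigenvalue near the bulk edge
assert abs(eigs.min() - edge_low) < 0.1    # bottom eigenvalue near the other edge
```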
is plotted as the red line in the figure above.
Remark 1.2 We will not show the proof of the Marchenko–Pastur Theorem here (you can see, for example, [Bai99] for several different proofs of it), but one approach to a proof uses the so-called moment method. The core of the idea is to note that one can compute moments...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
defined over the complex numbers, meaning simply that each entry of $X$ is an iid complex-valued standard gaussian $\mathcal{CN}(0,1)$, the reverse inequality is conjectured for all $n \ge 1$:
$$\alpha_{\mathbb{C}}(n+1) \le \alpha_{\mathbb{C}}(n).$$
Notice that the singular values of $\frac{1}{\sqrt{n}} X$ are simply the square roots of the eigenvalues of $S_n$, ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
$+\, \beta v v^T$, for $v$ a unit-norm vector and $\beta \ge 0$. One way to think about this instance is as each data point $x$ consisting of a signal part $\sqrt{\beta}\, g_0 v$, where $g_0$ is a one-dimensional standard gaussian (so $\sqrt{\beta}\, g_0 v$ is a gaussian multiple of a fixed vector), and a noise part $g \sim \mathcal{N}(0, I)$ (independent of $g_0$). Then $x = g + \sqrt{\beta}\, g_0 v$ is a gaussian random variable ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
nice papers about this and similar phenomena, including [Pau, Joh01, BBAP05, Pau07, BS05, Kar05, BGN11, BGN12].5 In what follows we will find the critical value of β and estimate the location of the largest eigenvalue of Sn. While the argument we will use can be made precise (and is borrowed from [Pau]) we will In short...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
$v_1 \in \mathbb{R}$, denote, respectively, an eigenvalue and associated eigenvector of $S_n$. By the definition of eigenvalue and eigenvector we have
$$\frac{1}{n} \begin{bmatrix} (1+\beta) Z_1^T Z_1 & \sqrt{1+\beta}\, Z_1^T Z_2 \\ \sqrt{1+\beta}\, Z_2^T Z_1 & Z_2^T Z_2 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \hat{\lambda} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix},$$
which can be rewritten as
$$\frac{1}{n} \left[ (1+\beta) Z_1^T Z_1 v_1 + \sqrt{1+\beta}\, ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
$$= \hat{\lambda} v_1.$$
If $v_1 \neq 0$ (again, not properly justified here, see [Pau]) then this means that
$$\hat{\lambda} = (1+\beta) \frac{1}{n} Z_1^T Z_1 + \sqrt{1+\beta}\, \frac{1}{n} Z_1^T Z_2 \left( \hat{\lambda} I - \frac{1}{n} Z_2^T Z_2 \right)^{-1} \sqrt{1+\beta}\, \frac{1}{n} Z_2^T Z_1. \qquad (18)$$
The first observation is that, because $Z_1 \in \mathbb{R}^n$ has standard gaussian entries, $\frac{1}{n} Z_1^T Z_1 \to 1$, meaning that (...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
Writing the singular value decomposition $\frac{1}{\sqrt{n}} Z_2 = U D^{1/2} V^T$ and substituting into (18),
$$\hat{\lambda} = (1+\beta) \left[ 1 + \frac{1}{n} Z_1^T U D^{1/2} V^T \left( \hat{\lambda} I - V D V^T \right)^{-1} V D^{1/2} U^T Z_1 \right] = (1+\beta) \left[ 1 + \frac{1}{n} \left( U^T Z_1 \right)^T D^{1/2} \left( \hat{\lambda} I - D \right)^{-1} D^{1/2} \left( U^T Z_1 \right) \right]...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
$$\hat{\lambda} = (1+\beta) \left[ 1 + \frac{1}{n} \sum_{j=1}^{p-1} g_j^2\, \frac{D_{jj}}{\hat{\lambda} - D_{jj}} \right].$$
Because we expect the diagonal entries of $D$ to be distributed according to the Marchenko–Pastur distribution and $g$ to be independent of it, we expect that (again, not properly justified here, see [Pau])
$$\frac{1}{p-1} \sum_{j=1}^{p-1} g_j^2\, \frac{D_{jj}}{\hat{\lambda} - D_{jj}} ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
$\beta > \sqrt{\gamma}$. Another important question is whether the leading eigenvector actually correlates with the planted perturbation (in this case $e_1$). It turns out that very similar techniques can answer this question as well [Pau] and show that the leading eigenvector $v_{\max}$ of $S_n$ will be non-trivially correlated with $e_1$ if and only ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
$$\lambda_{\max} \left( \frac{1}{\sqrt{n}} W + \xi v v^T \right) \to \xi + \frac{1}{\xi}. \qquad (21)$$

1.3.2 An open problem about spike models

Open Problem 1.3 (Spike Model for cut-SDP [MS15]; has since been solved [MS15]) Let $W$ denote a symmetric Wigner matrix with i.i.d. entries $W_{ij} \sim \mathcal{N}(0,1)$. Also, given $B \in \mathbb{R}^{n \times n}$ symmetric, define
$$Q(B) = \max \{ \operatorname{Tr}(...
Define $q(\xi)$ as ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
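The limit (21) is easy to observe numerically: for ξ above the critical value 1, the top eigenvalue of the spiked Wigner matrix detaches from the bulk edge 2 and lands near ξ + 1/ξ. The sizes and tolerance below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n, xi = 1000, 2.0
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2)        # symmetric Wigner, off-diagonal entries ~ N(0,1)
v = np.ones(n) / np.sqrt(n)       # unit-norm spike direction

lam_max = np.linalg.eigvalsh(W / np.sqrt(n) + xi * np.outer(v, v)).max()

# Expected location xi + 1/xi = 2.5 (outside the bulk edge at 2).
assert abs(lam_max - (xi + 1 / xi)) < 0.15
```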
Since $\frac{1}{n} \mathbb{E} \left[ \operatorname{Tr} \left( \mathbf{1}\mathbf{1}^T \left( \xi \frac{\mathbf{1}\mathbf{1}^T}{n} + \frac{1}{\sqrt{n}} W \right) \right) \right] \approx \xi$, by taking $X = \mathbf{1}\mathbf{1}^T$ we expect that $q(\xi) \ge \xi$. These observations imply that ... $\xi < 2$ (see [MS15]). A reasonable conjecture is that it is equal to 1. This would imply that a certain semidefinite programming based algorithm for clustering under the S...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
graph. The number of connected components is simply the size of the smallest partition of the nodes into connected subgraphs. The Petersen graph is connected (and thus it has only 1 connected component). • A clique of a graph G is a subset S of its nodes such that the subgraph corresponding to it is complete. In other ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
$$r(G) := \max \{ c(G), c(G^c) \}.$$
Given $r$, let $R(r)$ denote the smallest integer $n$ such that every graph $G$ on $n$ nodes must have $r(G) \ge r$. Ramsey [Ram28] showed that $R(r)$ is finite, for every $r$.
Remark 2.1 It is easy to show that R(3) ≤ 6, try it!
We will need a simple estimate for what follows (it is a very useful consequen...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
We will proceed by estimating $\mathbb{E}[X]$. Note that, by linearity of expectation,
$$\mathbb{E}[X] = \sum_{S \in \binom{V}{r}} \mathbb{E}[X(S)],$$
and
$$\mathbb{E}[X(S)] = \operatorname{Prob} \{ S \text{ is a clique or independent set} \} = \frac{2}{2^{\binom{|S|}{2}}}.$$
This means that
$$\mathbb{E}[X] = \sum_{S \in \binom{V}{r}} \frac{2}{2^{\binom{|S|}{2}}} = \binom{n}{r}\, 2^{1 - \binom{r}{2}}.$$
By Proposition 2.2 we have,
$$\binom{n}{r}\, 2^{1 - \binom{r}{2}} \le ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
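The first-moment bound above can be evaluated directly: whenever E[X] < 1 for some n, a 2-coloring with no monochromatic set of size r exists, so R(r) > n. This sketch recovers the lower bound R(r) > 2^{r/2} numerically for small r.

```python
from math import comb

# E[X] = C(n, r) * 2^(1 - C(r, 2)), as derived above.
def expected_monochromatic_sets(n, r):
    return comb(n, r) * 2 ** (1 - comb(r, 2))

for r in range(4, 16):
    n = int(2 ** (r / 2))
    # E[X] < 1 at n = 2^(r/2), hence R(r) > 2^(r/2).
    assert expected_monochromatic_sets(n, r) < 1
```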
the definition of R(r) above, the following questions are open:
• What is the value of R(5)?
• What are the asymptotics of R(s)? In particular, improve on the base of the exponent on either the lower bound ($\sqrt{2}$) or the upper bound (4).
• Construct a family of graphs G = (V, E) with increasing number of vertices for ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
Then, with high probability, $R(G) \le 2 \log_2(n)$.
Proof. Given $n$, we are interested in upper bounding $\operatorname{Prob} \{ R(G) \ge \lceil 2 \log_2 n \rceil \}$, and we proceed by union bounding (and making use of Proposition 2.2):
$$\operatorname{Prob} \{ R(G) \ge \lceil 2 \log_2 n \rceil \} = \operatorname{Prob} \left\{ \bigcup_S \{ S \text{ is a clique or independe...}
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
EH89]) Prove or disprove the following: For any finite graph $H$, there exists a constant $\delta_H > 0$ such that any graph on $n$ nodes that does not contain $H$ as a subgraph (is an $H$-free graph) must have $r(G) \gtrsim n^{\delta_H}$. □
It is known that $r(G) \gtrsim \exp\left(c_H \sqrt{\log n}\right)$, for some constant $c_H > 0$ (see [Chu13] for a...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
> 2 log2 n, this clique is larger than any other clique that was in the graph before planting. This means that, if ω > 2 log2 n, there is enough information in the graph to find the planted clique. In fact, one can simply look at all subsets of size 2 log2 n + 1 and check whether each is a clique: if it is a clique then it ve...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
whether the planted clique contains them.
2. Is there a polynomial time algorithm that is able to distinguish, with high probability, $G$ from a draw of $\mathcal{G}\left(n, \frac{1}{2}\right)$ for $\omega \ll \sqrt{n}$? For example, for $\omega \approx \frac{\sqrt{n}}{\log n}$.
3. Is there a quasi-linear time algorithm able to find the largest clique of G (with hig...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
W. If we start a random walker at node $i$ ($X(0) = i$) then the probability that, at step $t$, it is at node $j$ is given by
$$\operatorname{Prob} \{ X(t) = j \mid X(0) = i \} = \left( M^t \right)_{ij}.$$
In other words, the probability cloud of the random walker at point $t$, given that it started at node $i$, is given by the row vector
$$\operatorname{Prob} \{ X(t) \mid X(0) = i \} = ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
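The random-walk matrix and the (M^t)_{ij} formula can be sketched on a small illustrative graph; the 4-node weight matrix below is an assumption for the demo.

```python
import numpy as np

# Random walk on an undirected graph: M = D^{-1} W.
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])
deg = W.sum(axis=1)
M = W / deg[:, None]                      # M[i, j] = Prob{X(1) = j | X(0) = i}

Mt = np.linalg.matrix_power(M, 5)         # (M^t)_{ij} for t = 5
assert np.allclose(Mt.sum(axis=1), 1.0)   # each row is a probability distribution

# For an undirected graph, the stationary distribution is proportional to degree.
pi = deg / deg.sum()
assert np.allclose(pi @ M, pi)
```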
$$M = \Phi \Lambda \Psi^T,$$
and $\Phi, \Psi$ form a biorthogonal system in the sense that $\Phi^T \Psi = I_{n \times n}$ or, equivalently, $\varphi_j^T \psi_k = \delta_{jk}$. Note that $\varphi_k$ and $\psi_k$ are, respectively, right and left eigenvectors of $M$; indeed, for all $1 \le k \le n$:
$$M \varphi_k = \lambda_k \varphi_k \quad \text{and} \quad \psi_k^T M = \lambda_k \psi_k^T.$$
Also, we can rewrite this decomposition as ... and it is easy to see t...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
). Then,
$$\lambda_k \varphi_k(i_{\max}) = M \varphi_k(i_{\max}) = \sum_{j=1}^{n} M_{i_{\max}, j}\, \varphi_k(j).$$
This means, by the triangle inequality, that
$$|\lambda_k| \le \sum_{j=1}^{n} M_{i_{\max}, j}\, \frac{|\varphi_k(j)|}{|\varphi_k(i_{\max})|} \le \sum_{j=1}^{n} M_{i_{\max}, j} = 1. \qquad \square$$
Remark 2.8 It is possible that there are other eigenvalues with magnitude 1, but only if G is d...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
This motivates truncating the Diffusion Map by taking only the first d coefficients.
Definition 2.10 (Truncated Diffusion Map) Given a graph G = (V, E, W) and dimension d, construct M and its decomposition $M = \Phi \Lambda \Psi^T$ as described above. The Diffusion Map truncated to $d$ dimensions is a map $\varphi_t^{(d)} : V \to \mathbb{R}^d$ given by $\varphi_t^{(d)}(v_i) = ...$
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
$$\sum_{j=1}^{n} \frac{1}{\deg(j)} \left[ \sum_{k=1}^{n} \lambda_k^t \left( \varphi_k(i_1) - \varphi_k(i_2) \right) \psi_k(j) \right]^2 = \sum_{j=1}^{n} \left[ \sum_{k=1}^{n} \lambda_k^t \left( \varphi_k(i_1) - \varphi_k(i_2) \right) \frac{\psi_k(j)}{\sqrt{\deg(j)}} \right]^2 = \left\| \sum_{k=1}^{n} ...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
of examples. The ring graph is a graph on $n$ nodes $\{1, \dots, n\}$ such that node $k$ is connected to $k-1$ and $k+1$, and 1 is connected to $n$. Figure 2 shows its Diffusion Map truncated to two dimensions. Another simple graph is $K_n$, the complete graph on $n$ nodes (in which every pair of nodes shares an edge); see Figure 3.
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
goal is for the graph to capture the structure of the manifold. To each data point we will associate a node. For this we should only connect points that are close in the manifold, and not points that may appear close in Euclidean space simply because of the curvature of the manifold. This is achieved by picking a smal...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
if the square or stripe moves to the right all the way to the end of the screen, it shows up on the left side (and the same for up-down in the two-dimensional case). Not only should this point cloud have a one-dimensional structure, but it should also exhibit a circular structure. Remarkably, this structure is completely ap...
https://ocw.mit.edu/courses/18-s096-topics-in-mathematics-of-data-science-fall-2015/5f0f7205d1cf274e80d77345a7edbf2a_MIT18_S096F15_TenLec.pdf
, let’s say that there are two labels, {−1, +1}. Let’s say we are given the task of labeling point “?” in Figure 10 given the labeled points. The natural label to give to the unlabeled point would be 1. However, let’s say that we are given not just one unlabeled point, but many, as in Figure 11; then it starts being ap...
to node i. We are thus interested in solving
\[
\min_{\substack{f : V \to \mathbb{R} \\ f(i) = f_i,\; i = 1, \ldots, l}} \; \sum_{i<j} w_{ij} \big( f(i) - f(j) \big)^2 .
\]
If we denote by f the vector (in \(\mathbb{R}^n\)) with the function values, then we can rewrite
\[
\sum_{i<j} w_{ij} \big( f(i) - f(j) \big)^2 = \sum_{i<j} w_{ij} \left[ (e_i - e_j)^T f \right] \left[ (e_i - \ldots \right.
\]
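The objective above is exactly the Laplacian quadratic form: expanding the squares gives \(\sum_{i<j} w_{ij}(f(i)-f(j))^2 = f^T L_G f\) with \(L_G = D - W\). A small numerical sanity check (an illustration with made-up weights):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.random((n, n))
W = np.triu(W, 1) + np.triu(W, 1).T        # symmetric weights, zero diagonal
D = np.diag(W.sum(axis=1))
L = D - W                                   # graph Laplacian L = D - W

f = rng.standard_normal(n)
quad = sum(W[i, j] * (f[i] - f[j]) ** 2
           for i in range(n) for j in range(i + 1, n))
print(np.isclose(quad, f @ L @ f))          # True: sum_{i<j} w_ij (f_i - f_j)^2 = f^T L f
```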
\(-\int f(x) f''(x)\,dx\).
\[
\int \|\nabla f(x)\|^2 \, dx = \int \sum_{k=1}^{d} \left( \frac{\partial f}{\partial x_k}(x) \right)^2 dx \;\overset{\text{B.T.}}{=}\; -\int f(x) \sum_{k=1}^{d} \frac{\partial^2 f}{\partial x_k^2}(x)\,dx = -\int f(x)\,\Delta f(x)\,dx,
\]
which helps motivate the use of the term graph Laplacian. Let us consider our problem \(\min_{f} \ldots\)
Remark 2.13 The function f constructed is called a harmonic extension. Indeed, it shares properties with harmonic functions in Euclidean space, such as the mean value property and maximum principles; if v_i is an unlabeled point, then
\[
f(i) = \left[ D_u^{-1} \left( W_{ul} f_l + W_{uu} f_u \right) \right]_i = \frac{1}{\deg(i)} \sum_{j=1}^{n} \ldots
\]
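Concretely, the harmonic extension can be computed by solving the linear system \((D_u - W_{uu}) f_u = W_{ul} f_l\). The sketch below (my own illustration; labels and weights are made up) does this and then checks the mean value property at an unlabeled node:

```python
import numpy as np

rng = np.random.default_rng(1)
n, l = 10, 3                                   # n nodes, the first l are labeled
W = rng.random((n, n))
W = np.triu(W, 1) + np.triu(W, 1).T            # symmetric weights, zero diagonal
deg = W.sum(axis=1)

f_l = np.array([1.0, -1.0, 1.0])               # labels on nodes 0..l-1
W_ul, W_uu = W[l:, :l], W[l:, l:]
D_u = np.diag(deg[l:])

# Harmonic extension: f_u = (D_u - W_uu)^{-1} W_ul f_l
f_u = np.linalg.solve(D_u - W_uu, W_ul @ f_l)
f = np.concatenate([f_l, f_u])

# Mean value property at an unlabeled node i: f(i) = (1/deg(i)) sum_j w_ij f(j)
i = l                                          # first unlabeled node
print(np.isclose(f[i], W[i] @ f / deg[i]))     # True
```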
Let’s say that we want to find a function on \(\mathbb{R}^d\) that takes the value 1 at zero and −1 at the unit sphere, and that minimizes \(\int_{B_0(1)} \|\nabla f(x)\|^2 dx\). Let us consider the following function on B_0(1) (the ball centered at 0 with unit radius):
\[
f_\varepsilon(x) =
\begin{cases}
1 - 2\frac{|x|}{\varepsilon} & \text{if } |x| \le \varepsilon \\
-1 & \text{otherwise.}
\end{cases}
\]
A quick ...
not f^T L f but f^T L^2 f instead. Figure 15 shows the outcome of the same experiment with f^T L f replaced by f^T L^2 f and confirms our intuition that the discontinuity issue should disappear (see, e.g., [NSZ09] for more on this phenomenon). Figure 11: In this example we are given many unlabeled points; the unlab...
[Llo82] (also known as the k-means algorithm), is an iterative algorithm that alternates between:
• Given centers \(\mu_1, \ldots, \mu_k\), assign each point \(x_i\) to the cluster \(l = \arg\min_{l=1,\ldots,k} \|x_i - \mu_l\|\).
• Update the centers \(\mu_l = \frac{1}{|S_l|} \sum_{i \in S_l} x_i\).
Unfortunately, Lloyd’s algorithm is not guaranteed to ...
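A minimal sketch of the two alternating steps (an illustrative implementation, not the notes' code; initial centers must be supplied, as in the description above, and the toy data are made up):

```python
import numpy as np

def lloyd(X, centers, iters=100):
    """Lloyd's algorithm: alternate the assignment and center-update steps."""
    centers = centers.copy()
    for _ in range(iters):
        # Assignment step: send each x_i to the cluster of the closest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its cluster S_l.
        new_centers = np.array([X[labels == l].mean(axis=0)
                                for l in range(len(centers))])
        if np.allclose(new_centers, centers):   # fixed point reached
            break
        centers = new_centers
    return labels, centers

# Toy data: two tight, well-separated blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels, centers = lloyd(X, centers=X[[0, 20]])  # one initial center per blob
print(labels[:20].tolist() == [0] * 20 and labels[20:].tolist() == [1] * 20)  # True
```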
always convex clusters. This means that k-means may have difficulty finding clusters such as those in Figure 17.

3.2 Spectral Clustering

A natural way to try to overcome the issues of k-means depicted in Figure 17 is to use Diffusion Maps: given the data points, we construct a weighted graph G = (V, E, W) using a kernel K(...
(Spectral Clustering) Given a graph G = (V, E, W) and a number of clusters k (and t), Spectral Clustering consists in taking the (k − 1)-dimensional Diffusion Map
\[
\phi_t^{(k-1)}(i) =
\begin{pmatrix}
\lambda_2^t \varphi_2(i) \\
\vdots \\
\lambda_k^t \varphi_k(i)
\end{pmatrix}
\]
and clustering the points \(\phi_t^{(k-1)}(1), \phi_t^{(k-1)}(2), \ldots, \phi_t^{(k-1)}(n) \in \mathbb{R}^{k-1}\) using, for...
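A sketch of this pipeline (my own illustration): the eigenvectors \(\varphi_k\) of \(M = D^{-1}W\) can be obtained from the symmetrized matrix \(D^{-1/2} W D^{-1/2}\), and the embedding coordinates are \(\lambda_k^t \varphi_k(i)\). On two dense blocks joined by weak edges, the first diffusion coordinate separates the blocks by sign:

```python
import numpy as np

def diffusion_map(W, k, t=1):
    """(k-1)-dimensional Diffusion Map from the random-walk matrix M = D^{-1} W.

    M has the same eigenvalues as the symmetric S = D^{-1/2} W D^{-1/2},
    and its right eigenvectors are phi = D^{-1/2} v for eigenvectors v of S."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    lam, V = np.linalg.eigh(S)
    lam, V = lam[::-1], V[:, ::-1]             # sort so lam_1 >= lam_2 >= ...
    phi = V / np.sqrt(d)[:, None]
    return (lam[1:k] ** t) * phi[:, 1:k]       # coords lam_j^t phi_j(i), j = 2..k

# Two dense blocks joined by weak edges (made-up example).
n = 10
B = np.ones((n, n)) - np.eye(n)                # complete graph on each block
W2 = np.block([[B, 0.01 * np.ones((n, n))],
               [0.01 * np.ones((n, n)), B]])
emb = diffusion_map(W2, k=2)
print(np.all(emb[:n, 0] * emb[n:, 0] < 0))     # True: blocks get opposite signs
```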
, W ), a natural measure of a vertex partition (S, S^c) is
\[
\mathrm{cut}(S) = \sum_{i \in S} \sum_{j \in S^c} w_{ij}.
\]
Note, however, that the minimum cut is achieved for S = ∅ (since cut(∅) = 0), which is a rather meaningless choice of partition. Remark 3.3 One way to circumvent this issue is to ask that |S| = |S^c| (let’s say that t...
A similar object is the Normalized Cut, Ncut, which is given by
\[
\mathrm{Ncut}(S) = \frac{\mathrm{cut}(S)}{\mathrm{vol}(S)} + \frac{\mathrm{cut}(S^c)}{\mathrm{vol}(S^c)}.
\]
Note that Ncut(S) and h(S) are tightly related; in fact, it is easy to see that
\[
h(S) \le \mathrm{Ncut}(S) \le 2\,h(S).
\]
(Footnote 13: W is the matrix of weights and D the degree matrix, a diagonal matrix with diagonal entries D_{ii} = deg(i...
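With h(S) the ratio \(\mathrm{cut}(S)/\min(\mathrm{vol}(S), \mathrm{vol}(S^c))\), the chain \(h \le \mathrm{Ncut} \le 2h\) follows from \(\mathrm{cut}(S) = \mathrm{cut}(S^c)\). A quick numerical check on a random weighted graph (illustrative; the graph is made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 12
W = rng.random((n, n))
W = np.triu(W, 1) + np.triu(W, 1).T        # random symmetric weights, zero diagonal
deg = W.sum(axis=1)

S = list(range(5))
Sc = list(range(5, n))
cut_S = W[np.ix_(S, Sc)].sum()             # cut(S) = sum_{i in S, j in S^c} w_ij
vol_S, vol_Sc = deg[S].sum(), deg[Sc].sum()

ncut = cut_S / vol_S + cut_S / vol_Sc      # uses cut(S) = cut(S^c)
h = cut_S / min(vol_S, vol_Sc)             # ratio h(S)
print(h <= ncut <= 2 * h)                  # True
```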
Proposition 3.5 Given a graph G = (V, E, W ) and a partition (S, S^c) of V , Ncut(S) corresponds to the probability, in the random walk associated with G, that a random walker started from the stationary distribution goes to S^c conditioned on being in S, plus the probability of going to S conditioned on being in S^c; more e...
balanced partition. Recall that the balanced partition problem can be written as
\[
\min_{\substack{y \in \{-1,1\}^n \\ \mathbf{1}^T y = 0}} \frac{1}{4}\, y^T L_G y.
\]
An intuitive way to relax the balanced condition is to allow the labels y to take values in two different real values a and b (say y_i = a if i ∈ S and y_i = b if i ∉ S) but not necessarily ±1. We can then use the notion...
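The normalization by 1/4 is because each cut edge contributes \((y_i - y_j)^2 = 4\) when \(y \in \{-1,1\}^n\), so \(\frac14 y^T L_G y\) counts exactly the cut weight. A quick check (illustrative, with made-up weights):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
W = rng.random((n, n))
W = np.triu(W, 1) + np.triu(W, 1).T              # symmetric weights, zero diagonal
L = np.diag(W.sum(axis=1)) - W                   # graph Laplacian

y = np.array([1] * 5 + [-1] * 5, dtype=float)    # balanced labels: S = first 5 nodes
cut_S = W[:5, 5:].sum()                          # weight of edges crossing (S, S^c)
print(np.isclose(y @ L @ y / 4, cut_S))          # True: (1/4) y^T L_G y = cut(S)
```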
\[
y_i =
\begin{cases}
\left( \dfrac{\mathrm{vol}(S^c)}{\mathrm{vol}(S)\,\mathrm{vol}(G)} \right)^{1/2} & \text{if } i \in S \\[8pt]
-\left( \dfrac{\mathrm{vol}(S)}{\mathrm{vol}(S^c)\,\mathrm{vol}(G)} \right)^{1/2} & \text{if } i \in S^c.
\end{cases}
\]
Proof.
\[
y^T L_G y = \frac{1}{2} \sum_{i,j} w_{ij} (y_i - y_j)^2 = \sum_{i \in S} \sum_{j \in S^c} w_{ij} (y_i - y_j)^2 = \ldots
\]
is, in general, NP-hard, we consider a similar problem in which the constraint that y can only take two values is removed:
\[
\begin{aligned}
\min \quad & y^T L_G y \\
\text{s.t.} \quad & y \in \mathbb{R}^n \\
& y^T D y = 1 \\
& y^T D \mathbf{1} = 0.
\end{aligned}
\qquad (27)
\]
Given a solution of (27) we can round it to a partition by setting a threshold τ and taking S = {i ∈ V : y_i ≤ τ }. We will see below that (27) is an e...
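Problem (27) is an eigenproblem in disguise: substituting \(v = D^{1/2} y\) turns it into minimizing \(v^T N v\) over unit vectors v orthogonal to \(D^{1/2}\mathbf{1}\), where \(N = D^{-1/2} L_G D^{-1/2}\), so the minimizer is the eigenvector of the second-smallest eigenvalue of N rescaled by \(D^{-1/2}\). A sketch (my own illustration, graph made up, including the threshold rounding):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10
# A connected test graph: a ring plus random chord weights.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
extra = rng.random((n, n)) * (rng.random((n, n)) < 0.3)
W += np.triu(extra, 1) + np.triu(extra, 1).T
d = W.sum(axis=1)
L = np.diag(d) - W

# Solve (27): with v = D^{1/2} y this is an eigenproblem for N = D^{-1/2} L D^{-1/2}.
N = L / np.sqrt(np.outer(d, d))
lam, V = np.linalg.eigh(N)                # ascending eigenvalues
y = V[:, 1] / np.sqrt(d)                  # second eigenvector, rescaled
y /= np.sqrt(y @ (d * y))                 # enforce y^T D y = 1
assert np.isclose(y @ d, 0.0, atol=1e-8)  # y^T D 1 = 0 holds automatically

# Round by sweeping the threshold tau over the sorted entries of y.
order = np.argsort(y)
ratios = []
for m in range(1, n):
    S, Sc = order[:m], order[m:]
    c = W[np.ix_(S, Sc)].sum()
    ratios.append(c / min(d[S].sum(), d[Sc].sum()))   # ratio h(S) of each sweep cut
print(min(ratios) >= lam[1] / 2)          # True: consistent with lam_2 / 2 <= h_G
```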
Since relaxation (27) is obtained from (26) by removing a constraint, we immediately have that
\[
\lambda_2(L_G) \le \min_{S \subset V} \mathrm{Ncut}(S).
\]
This means that
\[
\tfrac{1}{2} \lambda_2(L_G) \le h_G.
\]
In what follows we will show a guarantee for Algorithm 3.2. Lemma 3.8 There is a threshold τ producing a partition S such that ... This implies in particular that h(S) ≤ ...
y = ϕ_2 satisfies the conditions and gives δ = λ_2(L_G), which proves the Lemma. We will pick this threshold at random and use the probabilistic method to show that at least one of the thresholds works. First, we can assume without loss of generality that y_1 ≤ ⋯ ≤ y_n (we can simply relabel the vertices). Also, note that sc...
\[
\frac{1}{2} \sum_{i \in V} \sum_{j \in V} w_{ij}\, \mathrm{Prob}\{(S, S^c) \text{ cuts the edge } (i,j)\}
\]
Note that Prob{(S, S^c) cuts the edge (i, j)} is |x_i^2 − x_j^2| if x_i and x_j have the same sign, and x_i^2 + x_j^2 otherwise. Both cases can be conveniently upper bounded by |x_i − x_j| (|x_i| + |x_j|). This means that ...
{x_i is in the smallest set (in terms of volume)}; to break ties, if vol(S) = vol(S^c) we take the “smallest” set to be the one containing the first indices. Note that m is always in the largest set. Any vertex j < m is in the smallest set if x_j ≤ τ ≤ x_m = 0, and any j > m is in the smallest set if 0 = x_m ≤ τ ≤ x_j. This mea...
cases where h_G ≤ φ or h_G ≥ 2√φ. Can this be improved?

Open Problem 3.1 Does there exist a constant c > 0 such that it is NP-hard, given φ and G, to distinguish between the cases
1. h_G ≤ φ, and
2. h_G ≥ c√φ?

It turns out that this is a consequence [RST12] of an important conjecture in Theoretical Computer Science ...
One drawback of the power method is that, when using it, one cannot be sure a posteriori that there is no eigenvalue of A much larger than the one we have found, since it could happen that all our guesses were orthogonal to the corresponding eigenvector. It simply guarantees that if such an eige...
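For reference, a minimal power-method sketch (illustrative; the test matrix is made up). As noted above, the guarantee is conditional: the Rayleigh quotient converges to the top eigenvalue only when the random start is not orthogonal to the top eigenvector, which holds with probability 1:

```python
import numpy as np

def power_method(A, iters=500, seed=0):
    """Power method: iterate y_{t+1} = A y_t / ||A y_t||, return a Rayleigh quotient.

    Converges to the largest-magnitude eigenvalue provided the random start
    is not orthogonal to the corresponding eigenvector."""
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        y = A @ y
        y /= np.linalg.norm(y)
    return y @ A @ y

# Symmetric test matrix with known spectrum: top eigenvalue 5.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = Q @ np.diag([5.0, 2.0, 1.5, 1.0, 0.5, 0.3, 0.2, 0.1]) @ Q.T
print(np.isclose(power_method(A), 5.0))   # True
```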
necessarily forming a partition). Another natural definition is
\[
\varphi_G(k) = \min_{S :\ \mathrm{vol}(S) \le \frac{1}{k} \mathrm{vol}(G)} \frac{\mathrm{cut}(S)}{\mathrm{vol}(S)}.
\]
It is easy to see that φ_G(k) ≤ ρ_G(k). The following is known. Theorem 3.11 ([LGT12]) Let G = (V, E, W ) be a graph and k a positive integer. Then
\[
\rho_G(k) \le O\big(k^2\big) \sqrt{\lambda_k}.
\]
Also, ρ_G(k) ≤ O(...
many more concentration inequalities. Chebyshev’s inequality is a simple inequality that controls fluctuations from the mean.

Theorem 4.2 (Chebyshev’s inequality) Let X be a random variable with E[X^2] < ∞. Then,
\[
\mathrm{Prob}\{ |X - \mathbb{E}X| > t \} \le \frac{\mathrm{Var}(X)}{t^2}.
\]
Proof. Apply Markov’s inequality to the random variable (X − E[X])^2 to get: ...
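A Monte Carlo sanity check of Chebyshev's bound (my own illustration, on an exponential random variable, for which EX = Var(X) = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=200_000)    # EX = 1, Var(X) = 1
for t in (2.0, 3.0, 5.0):
    freq = np.mean(np.abs(X - 1.0) > t)         # empirical Prob{|X - EX| > t}
    assert freq <= 1.0 / t**2                   # Chebyshev bound Var(X)/t^2
print("Chebyshev bound holds empirically")
```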
The inequality implies that fluctuations larger than O(√n) are unlikely; for t = a√(2n log n) we get that the probability is at most 2/n. Proof. We first get a probability bound for the event \(\sum_{i=1}^{n} X_i > t\). The proof, again, will follow from Markov. Since we want an exponentially small probability, we use a classical trick that ...
\((\lambda a)^2/2\).
\[
\mathrm{Prob}\left\{ \sum_{i=1}^{n} X_i > t \right\} \le e^{-t\lambda} \prod_{i=1}^{n} e^{(\lambda a)^2/2} = e^{-t\lambda} e^{n(\lambda a)^2/2}.
\]
(Footnote 15: This follows immediately from the Taylor expansions \(\cosh(x) = \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!}\), \(e^{x^2/2} = \sum_{n=0}^{\infty} \frac{x^{2n}}{2^n n!}\), and \((2n)! \ge 2^n n!\).) This inequality holds for any choice of λ ≥ 0, so we choose the value...
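Choosing λ amounts to minimizing the exponent \(-t\lambda + n(\lambda a)^2/2\), whose minimizer is \(\lambda = t/(na^2)\), giving the bound \(\exp(-t^2/(2na^2))\). A grid search confirms the calculus (illustrative numbers, my own choice):

```python
import numpy as np

n, a, t = 100, 1.0, 15.0
lam = np.linspace(1e-4, 1.0, 100_000)
exponent = -t * lam + n * (lam * a) ** 2 / 2     # exponent in e^{-t lam + n (lam a)^2 / 2}
lam_star = lam[np.argmin(exponent)]

print(np.isclose(lam_star, t / (n * a**2), atol=1e-3))                 # lam* = t/(n a^2)
print(np.isclose(exponent.min(), -t**2 / (2 * n * a**2), atol=1e-4))   # -t^2/(2 n a^2)
```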
\[
r_i =
\begin{cases}
-1 & \text{with probability } p/2 \\
0 & \text{with probability } 1 - p \\
1 & \text{with probability } p/2.
\end{cases}
\]
Then, E(r_i) = 0 and |r_i| ≤ 1, so Hoeffding’s inequality gives:
\[
\mathrm{Prob}\left\{ \left| \sum_{i=1}^{n} r_i \right| > t \right\} \le 2 \exp\left( -\frac{t^2}{2n} \right).
\]
Intuitively, ...
\[
\mathrm{Prob}\left\{ \left| \sum_{i=1}^{n} X_i \right| > t \right\} \le 2 \exp\left( -\frac{t^2}{2n\sigma^2 + \frac{2}{3} a t} \right).
\]
Remark 4.6 Before proving Bernstein’s Inequality, note that on the example of Remark 4.4 we get
\[
\mathrm{Prob}\left\{ \left| \sum_{i=1}^{n} r_i \right| > t \right\} \le \ldots
\]
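For the sparse signs r_i above, Var(r_i) = p and |r_i| ≤ 1, so Bernstein's bound with σ² = p and a = 1 can be far smaller than Hoeffding's, which only uses boundedness. Illustrative numbers (my own choice):

```python
import numpy as np

n, p, t = 10_000, 0.01, 200.0
hoeffding = 2 * np.exp(-t**2 / (2 * n))                    # uses only |r_i| <= 1
bernstein = 2 * np.exp(-t**2 / (2 * n * p + (2 / 3) * t))  # uses Var(r_i) = p, a = 1
print(bernstein < hoeffding)     # True: Bernstein exploits the small variance
```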
\[
\ldots \le 1 + \frac{\sigma^2}{a^2} \left( e^{\lambda a} - 1 - \lambda a \right).
\]
Therefore,
\[
\mathrm{Prob}\left\{ \sum_{i=1}^{n} X_i > t \right\} \le e^{-\lambda t} \left[ 1 + \frac{\sigma^2}{a^2} \left( e^{\lambda a} - 1 - \lambda a \right) \right]^n.
\]
We will use a few simple inequalities (that can be easily proved with calculus), such as (footnote 16) \(1 + x \le e^x\), for all x ∈ \(\mathbb{R}\). This means that ..., which rea...
\[
\mathrm{Prob}\left\{ \sum_{i=1}^{n} X_i > t \right\} \le \exp\left( -\frac{n\sigma^2}{a^2} \left\{ (1+u)\log(1+u) - u \right\} \right).
\]
The rest of the proof follows by noting that, for every u > 0,
\[
(1+u)\log(1+u) - u \ge \frac{u^2}{2 + \frac{2}{3}u},
\]
which implies:
\[
\mathrm{Prob}\left\{ \sum_{i=1}^{n} X_i > t \right\} \le \exp\big( -\,\ldots
\]
(Footnote 16: In fact, y = 1 + x is a tange...
inside the exponent):
\[
\mathrm{Prob}\{ |F(X) - \mathbb{E}F(X)| \ge t \} \le 2 \exp\left( -c\, \frac{t^2}{\sigma^2} \right).
\]
This exposition follows closely the proof of Theorem 2.1.12 in [Tao12], and the original argument is due to Maurey and Pisier. For a proof with the optimal constants see, for example, Theorem 3.25 in the notes [vH14]. We will also assume the function F is ...
\(= \mathbb{E}\left[\exp\left(\lambda F(X)\right)\right] \mathbb{E}\left[\exp\left(-\lambda F(Y)\right)\right] \ge \mathbb{E}\left[\exp\left(\lambda F(X)\right)\right] \ldots\) Now we use the Fundamental Theorem of Calculus along a circular arc from X to Y:
\[
F(X) - F(Y) = \int_0^{\pi/2} \frac{\partial}{\partial \theta} F\big( Y \cos\theta + X \sin\theta \big)\, d\theta.
\]
The advantage of using the circular arc is that, for any θ, \(X_\theta := Y \cos\theta + X \sin\theta\) is another random ... \(X_\theta' = -Y \sin\theta + X \cos\ldots\)