Let $\mathcal{F} = \{f : \mathcal{X} \to \mathbb{R}\}$ and $C_f = \{(x,t) \in \mathcal{X} \times \mathbb{R} : 0 \le t \le f(x) \text{ or } f(x) \le t \le 0\}$. Define the class of sets $\mathcal{C} = \{C_f : f \in \mathcal{F}\}$.

Definition 12.1. If $\mathcal{C}$ is a VC class of sets, then $\mathcal{F}$ is a VC-subgraph class of functions and, by definition, $VC(\mathcal{F}) = VC(\mathcal{C})$.

Note that an equivalent definition of $C_f$ is $C_f = \{(x,t) \in \mathcal{X} \times \mathbb{R} : |f(x)| \ge |t|$…
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
Definition 12.2. Given $\varepsilon > 0$ and $f_1, \ldots, f_N \in \mathcal{F}$, we say that $f_1, \ldots, f_N$ are $\varepsilon$-separated if $d(f_i, f_j) > \varepsilon$ for any $i \ne j$.

Definition 12.3. The $\varepsilon$-packing number, $D(\mathcal{F}, \varepsilon, d)$, is the maximal cardinality of an $\varepsilon$-separated set. Note that $D(\mathcal{F}, \varepsilon, d)$ is decreasing in $\varepsilon$.

Definition 12.4. Given $\varepsilon > 0$ and $f_1, \ldots$…
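The packing definitions above can be made concrete with a small numerical sketch (the toy class of sign vectors, the distance helper, and the greedy routine are my own illustration, not from the notes): any maximal greedily built $\varepsilon$-separated subset is an $\varepsilon$-packing, and its cardinality shrinks as $\varepsilon$ grows, matching the remark that $D(\mathcal{F}, \varepsilon, d)$ is decreasing in $\varepsilon$.

```python
import itertools
import math

def empirical_l2(f, g):
    """d(f, g) = ((1/n) * sum_i (f_i - g_i)^2)^(1/2) for functions given as vectors."""
    n = len(f)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, g)) / n)

def greedy_packing(functions, eps):
    """Greedily collect an eps-separated subset; its size lower-bounds D(F, eps, d)."""
    packing = []
    for f in functions:
        if all(empirical_l2(f, g) > eps for g in packing):
            packing.append(f)
    return packing

# Toy class: all sign vectors on n = 4 points; distinct vectors differing in k
# coordinates are at distance sqrt(k) >= 1, so all 16 are 0.5-separated.
F = [list(s) for s in itertools.product([-1.0, 1.0], repeat=4)]
assert len(greedy_packing(F, 0.5)) == 16
assert len(greedy_packing(F, 2.0)) == 1          # max distance is sqrt(4) = 2
assert len(greedy_packing(F, 0.5)) >= len(greedy_packing(F, 1.5))  # monotone
```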
…$f'_1, \ldots, f'_N$. Since $D > N$, there exist $f_i$ and $f_j$ such that for some $f'_k$, $d(f_i, f'_k) \le \varepsilon$ and $d(f_j, f'_k) \le \varepsilon$. Therefore, by the triangle inequality, $d(f_i, f_j) \le 2\varepsilon$, which is a contradiction. To prove the second inequality, assume $f_1, \ldots, f_D$ is an optimal packing. For any $f \in \mathcal{F}$, the collection $f_1, \ldots, f_D, f$ would also be $\varepsilon$-p…
…balls is $C_d\big(\frac{\varepsilon}{2}\big)^d$, so the $D$ disjoint balls have total volume $D\, C_d\big(\frac{\varepsilon}{2}\big)^d$, while they all fit inside a ball of radius $1 + \varepsilon/2$.

Lecture 12: VC subgraph classes of functions. Packing and covering numbers. 18.465

Therefore, $D \le \left(\frac{2+\varepsilon}{\varepsilon}\right)^d$.

Definition 12.6. $\log N(\mathcal{F}, \varepsilon, d)$ is called the metric entropy. For example, $\log N(B_1(0), \varepsilon, d) \le d \log\frac{3}{\varepsilon}$.

Lecture 13: Covering n…
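The metric-entropy example $\log N(B_1(0), \varepsilon, d) \le d\log\frac{3}{\varepsilon}$ can be checked directly in dimension $d = 1$, where $B_1(0) = [-1,1]$ (the explicit grid construction below is my own sketch, not from the notes): a grid of $\lceil 1/\varepsilon \rceil$ centers covers the interval, and $\lceil 1/\varepsilon \rceil \le 3/\varepsilon$.

```python
import math

def cover_interval(eps):
    """Centers of eps-balls covering B_1(0) = [-1, 1] (dimension d = 1)."""
    m = math.ceil(1.0 / eps)                      # each ball covers length 2*eps
    return [-1.0 + (2 * i + 1) * eps for i in range(m)]

for eps in [0.5, 0.25, 0.1, 0.03]:
    centers = cover_interval(eps)
    # every point of [-1, 1] lies within eps of some center
    assert all(min(abs(x - c) for c in centers) <= eps + 1e-9
               for x in [i / 500 - 1 for i in range(1001)])
    # covering-number bound N <= (3 / eps)^d with d = 1
    assert len(centers) <= 3.0 / eps
```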
Let $C_{f_r}$ and $C_{f_\ell}$ be the subgraphs of $f_r$ and $f_\ell$. Then

$P(C_{f_r}$ and $C_{f_\ell}$ pick out different subsets of $(z_1,t_1), \ldots, (z_k,t_k))$
$= P(\text{at least one point } (z_i,t_i) \text{ is picked by } C_{f_r} \text{ or } C_{f_\ell} \text{ but not picked by the other})$
$= 1 - P(\text{all points } (z_i,t_i) \text{ are picked either by both or by none})$
$= 1 - P\big((z_i,t_i) \text{ is picked either by both or by none}\big)^k$…
…of $(z_1,t_1), \ldots, (z_k,t_k)) \ge 1 - e^{-k\varepsilon/2}$. What $k$ should we choose so that $1 - \binom{m}{2} e^{-k\varepsilon/2} > 0$? Choose
$$k > \frac{2}{\varepsilon}\log\binom{m}{2}.$$
Then there exist $(z_1,t_1), \ldots, (z_k,t_k)$ such that all $C_{f_\ell}$ pick out different subsets. But $\{C_f : f \in \mathcal{F}\}$ is VC, so by Sauer's lemma, we can pick out at most $\left(\frac{ek}{V}\right)^V$ …
…$\frac{8e}{\varepsilon}\log\frac{7}{\varepsilon}$. Indeed, one can show this since $\frac{8e}{\varepsilon} > \log\frac{7}{\varepsilon}$. Hence, $m^{1/V} = s \le s'$ and, thus,
$$D(\mathcal{F}, \varepsilon, d) \le \left(\frac{8e}{\varepsilon}\log\frac{7}{\varepsilon}\right)^V.$$

Lecture 14: Kolmogorov's chaining method. Dudley's entropy integral. 18.465

For $f \in \mathcal{F} \subseteq [-1,1]^n$, define $R(f) = \frac{1}{n}\sum_{i=1}^n \varepsilon_i f_i$. Let $d(f,g) := \left(\frac{1}{n}\sum_{i=1}^n (f_i - g_i)^2\right)^{1/2}$ …
…$\le 2^{-(j+1)}$ for all $g \in F_{j+1}$; repeat until no such $f$ can be found. Define the projection $\pi_j : \mathcal{F} \to F_j$ as follows: for $f \in \mathcal{F}$, find $g \in F_j$ with $d(f, g) \le 2^{-j}$ and set $\pi_j(f) = g$. For any $f \in \mathcal{F}$,
$$f = \pi_0(f) + (\pi_1(f) - \pi_0(f)) + (\pi_2(f) - \pi_1(f)) + \cdots = \pi_0(f) + \sum_{j=1}^{\infty}\big(\pi_j(f) - \pi_{j-1}(f)\big).$$
Moreover, define the links $\pi_j(f) - \pi_{j-1}(f)$, with $d\big(\pi_{j-1}(f), \pi_j(f)\big)$…
$\cdots = \exp(-\cdots) \le \exp(-\cdots)$. Note that $\operatorname{card} L_{j-1,j} \le \operatorname{card} F_{j-1} \cdot \operatorname{card} F_j \le (\operatorname{card} F_j)^2$. Hence
$$P\left(\forall \ell \in L_{j-1,j},\ R(\ell) = \frac{1}{n}\sum_{i=1}^n \varepsilon_i \ell_i \le t\right) \ge 1 - (\operatorname{card} F_j)^2\, e^{-\frac{nt^2}{2^{-2j+5}}} = 1 - \frac{1}{(\operatorname{card} F_j)^2}\, e^{-u}$$
after changing the variable so that
$$t = \sqrt{\frac{2^{-2j+5}}{n}\big(4\log(\operatorname{card} F_j) + u\big)} \le \sqrt{\frac{2^{-2j+5}}{n}\,4\log(\operatorname{card} F_j)} + \sqrt{\frac{2^{-2j+5}}{n}\,u}$$ …
…$\ge 1 - e^{-u}$. Recall that $R(f) = \sum_{j=1}^{\infty} R\big(\pi_j(f) - \pi_{j-1}(f)\big)$. If $f$ is close to $0$, find $k$ such that $2^{-(k+1)} < d(0,f) \le 2^{-k}$. Then $\pi_0(f) = \cdots = \pi_k(f) = 0$ and so
$$R(f) = \sum_{j=k+1}^{\infty} R\big(\pi_j(f) - \pi_{j-1}(f)\big) \le \sum_{j=k+1}^{\infty} \frac{2^{7/2}}{\sqrt{n}}\,$$…
…$< d(0,f)$. $\square$

Lecture 15: More symmetrization. Generalized VC inequality. 18.465

Lemma 15.1. Let $\xi, \nu$ be random variables. Assume that $P(\nu \ge t) \le \Gamma e^{-\gamma t}$, where $\Gamma \ge 1$, $t \ge 0$, and $\gamma > 0$. Furthermore, for all $a > 0$ assume that $E\varphi(\xi) \le E\varphi(\nu)$, where $\varphi(x) = (x - a)_+$. Then $P(\xi \ge t) \le \Gamma e\, e^{-\gamma t}$…
…$= \Gamma \cdot e \cdot e^{-\gamma t}$, where we chose the optimal $a = t - \frac{1}{\gamma}$ to minimize $\frac{\Gamma e^{-\gamma a}}{\gamma}$. $\square$

[Figure: the hinge function $(x-a)_+$.]

Lemma 15.2. Let $x = (x_1, \ldots, x_n)$, $x' = (x'_1, \ldots, x'_n)$. If for functions $\varphi_1(x,x')$, $\varphi_2(x,x')$, $\varphi_3(x,x')$ … then $P\big(\varphi_1(x,x') \ge \varphi_2(x,x') + \varphi$…
…$\varphi_2 + E_{x'}\varphi_3 \ge t\} = \{\sup_{\delta>0}\big(E_{x'}\varphi_1 - E_{x'}\varphi_2 - \delta E_{x'}\varphi_3\big)\,4\delta \ge t\}$. By assumption, $P(\nu \ge t) \le \Gamma e^{-\gamma t}$. We want to prove $P(\xi \ge t) \le \Gamma e\, e^{-\gamma t}$. By the previous lemma, we only need to check whether $E\varphi(\xi) \le E\varphi(\nu)$. Indeed,
$$\xi = \sup_{\delta>0}\, E_{x'}\big(\varphi_1 - \varphi_2 - \delta\varphi_3\big)\,4\delta \le E_{x'}\sup_{\delta>0}\big(\varphi_1 - \varphi_2 - \delta\varphi_3\big)\,4\delta = E_{x'}\nu,$$
and, since $\varphi$ is convex and non-decreasing, $\varphi(\xi) \le \varphi(E_{x'}\nu) \le$ …
…$- \big(g(x_i) - g(x'_i)\big)\big)^2\Big)^{1/2}$. In Lecture 14, we proved
$$P_\varepsilon\left(\forall f \in \mathcal{F},\ \frac{1}{n}\sum_{i=1}^n \varepsilon_i\big(f(x_i) - f(x'_i)\big) \le \frac{2^{9/2}}{\sqrt{n}}\int_0^{d(0,f)} \log^{1/2} D(\mathcal{F}, \varepsilon, d)\, d\varepsilon\right) \ge \cdots$$
The complement of the above is
$$P_\varepsilon\left(\exists f \in \mathcal{F},\ \frac{1}{n}\sum_{i=1}^n \varepsilon_i\big(f(x_i) - f(x'_i)\big) \ge \cdots\right).$$
Taking the expectation with respect to $x, x'$, we get $P\big(\exists f \in \mathcal{F},\ \varepsilon_i(f($…
…are i.i.d. and we can switch $x_i$ and $x'_i$; the distribution is invariant under these permutations, so we can remove the $\varepsilon_i$. To the right of the "$\ge$" sign, only the distance $d(f, g)$ depends on $x, x'$. By Lemma 15.2 (minus the technical detail "$\exists f$"),
$$P\left(\exists f \in \mathcal{F},\ E_{x'}\,\frac{1}{n}\sum_{i=1}^n \big(f(x_i) - f(x'_i)\big) \ge E_{x'}\,\frac{2^{9/2}}{\sqrt{n}}\int_0^{d(0,f)}\log^{1/2} D(\mathcal{F}, \varepsilon, d)\, d\varepsilon\right)$$ …
…$- f(x'_i) - g(x_i) + g(x'_i)\big)^2\Big)^{1/2}$.

Definition 16.1. We say that $\mathcal{F}$ satisfies the uniform entropy condition if for all $n$ and all $(x_1, \ldots, x_n)$, $D(\mathcal{F}, \varepsilon, d_x) \le D(\mathcal{F}, \varepsilon)$, where $d_x(f,g) = \left(\frac{1}{n}\sum_{i=1}^n \big(f(x_i) - g(x_i)\big)^2\right)^{1/2}$.

Lemma 16.1. If $\mathcal{F}$ satisfies the uniform entropy condition, then $E_{x'}\int_0^{d(0,f)}\log^{1/2} D(\mathcal{F}, \varepsilon, d)\, d\varepsilon \le E_{x'}\, d(0$…
…$(\mathcal{F}, \varepsilon/2, d_{x,x'})$: we have $\varepsilon \le d(f_i, f_j) \le 2\, d_{x,x'}(f_i, f_j)$ and, hence, $\varepsilon/2 \le d_{x,x'}(f_i, f_j)$. So $f_1, \ldots, f_N$ is an $\varepsilon/2$-packing w.r.t. $d_{x,x'}$. Therefore, we can pack at least $N$ elements, and so $D(\mathcal{F}, \varepsilon, d) \le D(\mathcal{F}, \varepsilon/2, d_{x,x'})$.

Lecture 16: Consequences of the generalized VC inequality. 18.465

$$E_{x'}\int_0^{d(0,f)}\log^{1/2} D(\mathcal{F}, \varepsilon, d)\, d\varepsilon \le \cdots$$
$$\frac{1}{n}\sum_{i=1}^n\big(f^2(x_i) - 2f(x_i)Ef + Ef^2\big) \le \frac{1}{n}\sum_{i=1}^n\big(f^2(x_i) + Ef^2\big) \le \frac{1}{n}\sum_{i=1}^n f(x_i) + Ef \le 2\max\Big(Ef,\ \frac{1}{n}\sum_{i=1}^n f(x_i)\Big).$$

Theorem 16.1. If $\mathcal{F}$ satisfies the uniform entropy condition and $\mathcal{F} = \{f : \mathcal{X} \to [0,1]\}$, then, with probability at least $1 - e^{-t}$, for all $f \in \mathcal{F}$,
$$Ef - \frac{1}{n}\sum_{i=1}^n f(x_i) \le \frac{2^{7/2}}{\sqrt n}\int_0^{\sqrt{2Ef}}\log^{1/2} D(\mathcal{F}, \varepsilon/2)\, d\varepsilon + \cdots$$
$$\cdots \le \frac{2^{7/2}}{\sqrt n}\int_0^{\sqrt{2Ef}}\log^{1/2} D(\mathcal{F}, \varepsilon/2)\, d\varepsilon + 2^{7/2}\sqrt{\frac{2\big(\frac{1}{n}\sum_{i=1}^n f(x_i)\big)\, t}{n}},\quad\text{with probability } \ge 1 - e^{-t}.$$

Example 16.1 (VC-type entropy condition). $\log D(\mathcal{F}, \varepsilon) \le \alpha \log\frac{2}{\varepsilon}$. For VC-subgraph classes, the entropy condition is satisfied. Indeed, in Lecture 13 we proved that $D(\mathcal{F}, \varepsilon, d) \le \cdots$ for a VC-subgraph class $\mathcal{F}$ with $VC(\mathcal{F}) = V$, where $d(f,g) = d$…
$$\begin{cases} 2x\log^{1/2}\frac{1}{x}, & x \le \frac{1}{e},\\ 2x, & x \ge \frac{1}{e}.\end{cases}$$
Proof. First, check the inequality for $x \le 1/e$. Taking derivatives,
$$\Big(2x\log^{1/2}\tfrac{1}{x}\Big)' = 2\log^{1/2}\tfrac{1}{x} - \log^{-1/2}\tfrac{1}{x},$$
and $\log^{1/2}\frac{1}{x} \le 2\log^{1/2}\frac{1}{x} - \log^{-1/2}\frac{1}{x}$ whenever $\log\frac{1}{x} \ge 1$, i.e. $x \le \frac{1}{e}$ …

[Figure: the curves $x$ and $x\log^{1/2}(1/x)$.]
…$\ge 1 - e^{-t}$. $\square$

Lecture 17: Covering numbers of the convex hull. 18.465

Consider the classification setting, i.e. $\mathcal{Y} = \{-1, +1\}$. Denote the set of weak classifiers $\mathcal{H} = \{h : \mathcal{X} \to [-1, +1]\}$ and assume $\mathcal{H}$ is a VC-subgraph; hence $\log D(\mathcal{H}, \varepsilon, d_x) \le K V \log\frac{2}{\varepsilon}$. A voting algorithm outputs $\sum_{i=1}^T \lambda_i h_i$, where $h_i \in \mathcal{H}$, $\sum_{i=1}^T \lambda_i \le$ …
If $f = \sum_{i=1}^d \lambda_i h_i$, for each $h_i$ we can find $h_{k_i}$ such that $d(h_i, h_{k_i}) \le \varepsilon$. Let $f' = \sum_{i=1}^d \lambda_i h_{k_i}$. Then
$$d(f, f') = \|f - f'\|_x = \Big\|\sum_{i=1}^d \lambda_i(h_i - h_{k_i})\Big\|_x \le \sum_{i=1}^d \lambda_i\,\|h_i - h_{k_i}\|_x \le \varepsilon.$$
Define
$$\mathcal{F}_{D,d} = \Big\{\sum_{i=1}^d \lambda_i h_i :\ h_i \in \{h_1, \ldots, h_D\},\ \sum_{i=1}^d \lambda_i \le 1,\ \lambda_i \ge 0\Big\}.$$
Hence, we can approximate a…
$$E\, d_x^2\Big(\frac{1}{k}\sum_{j=1}^k Y_j,\ f\Big) = E\,\frac{1}{n}\sum_{i=1}^n\Big(\frac{1}{k}\sum_{j=1}^k Y_j(x_i) - f(x_i)\Big)^2 = \frac{1}{n}\sum_{i=1}^n E\Big(\frac{1}{k}\sum_{j=1}^k \big(Y_j(x_i) - EY_j(x_i)\big)\Big)^2 = \frac{1}{n}\sum_{i=1}^n \frac{1}{k^2}\sum_{j=1}^k E\big(Y_j(x_i) - EY_j(x_i)\big)^2 \le \frac{4}{k},$$
because $|Y_j(x_i) - EY_j(x_i)| \le 2$. Choose $k = 4/\varepsilon^2$. Then
$$E\,\Big\|\frac{1}{k}\sum_{j=1}^k Y_j - f\Big\|_x = E\, d_x$$…
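The probabilistic approximation step above (sampling $Y_1, \ldots, Y_k$ i.i.d. with $P(Y_j = h_i) = \lambda_i$) is easy to simulate. A minimal sketch, with a hypothetical random base class and weights of my own choosing (not from the notes): the Monte Carlo average of $d_x^2\big(\frac{1}{k}\sum_j Y_j, f\big)$ should sit comfortably below the bound $4/k$.

```python
import random

random.seed(0)
n, d, k = 20, 5, 16
# hypothetical base functions h_i with values in [-1, 1] on n sample points
H = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(d)]
lam = [0.1, 0.2, 0.3, 0.25, 0.15]                     # convex weights, sum to 1
f = [sum(lam[i] * H[i][p] for i in range(d)) for p in range(n)]

def sample_mean():
    """Draw Y_1, ..., Y_k i.i.d. with P(Y = h_i) = lam_i and average them."""
    draws = random.choices(H, weights=lam, k=k)
    return [sum(y[p] for y in draws) / k for p in range(n)]

def dx2(u, v):
    """Squared empirical L2 distance d_x(u, v)^2."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) / n

avg = sum(dx2(sample_mean(), f) for _ in range(2000)) / 2000
assert 0.0 <= avg <= 4.0 / k      # Maurey-type bound E d_x(g, f)^2 <= 4/k
```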
…ways to choose $k$ functions out of $h_1, \ldots, h_D$ for $\mathcal{F}'_{D,d}$. For $h_1, \ldots, h_d$, assume each $h_i$ is chosen $k_i$ times, so that $k = k_1 + \cdots + k_d$. We can formulate the problem as finding the number of strings of the form
$$\underbrace{0\cdots0}_{k_1}\,1\,\underbrace{0\cdots0}_{k_2}\,1\,\cdots\,1\,\underbrace{0\cdots0}_{k_d}.$$
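The string-counting step is the standard stars-and-bars argument: the number of such strings ($k$ zeros split into $d$ blocks by $d-1$ ones) equals $\binom{k+d-1}{d-1}$. A brute-force cross-check (my own sketch, not from the notes):

```python
import math
from itertools import product

def count_allocations(k, d):
    """Number of ways to write k = k_1 + ... + k_d with k_i >= 0 (brute force)."""
    return sum(1 for ks in product(range(k + 1), repeat=d) if sum(ks) == k)

for k, d in [(4, 2), (5, 3), (6, 4)]:
    # stars-and-bars: same count as strings with k zeros and d-1 ones
    assert count_allocations(k, d) == math.comb(k + d - 1, d - 1)
```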
…$f'' \in \mathcal{F}_{D,d}$ by $f' \in \mathcal{F}'_{D,d}$ within $\varepsilon$. Hence, we can approximate any $f \in \mathcal{F}_d$ by $f' \in \mathcal{F}'_{D,d}$ within $2\varepsilon$. Moreover,
$$\log N\big(\mathcal{F}_d = \operatorname{conv}_d \mathcal{H},\ 2\varepsilon,\ d_x\big) \le d\,\log\frac{e^2 D(k+d)}{d^2} = d\left(2 + \log D + \log\frac{k+d}{d^2}\right) \le d\left(2 + K V \log\frac{2}{\varepsilon} + \log\Big(1 + \frac{4}{\varepsilon^2}\Big)\right) \le K V d \log\frac{2}{\varepsilon},$$
since $\frac{k+d}{d^2} \le 1 + k$ and $d \ge 1$, $V \ge 1$.
…$^{\frac{2V}{V+2}}$.

Proof. Let $N(\mathcal{F}, \varepsilon\|F\|_2, \|\cdot\|_2) \le C\big(\frac{1}{\varepsilon}\big)^V$, and set $\varepsilon = C^{1/V}\, n^{-1/V}$, so that $\varepsilon\|F\|_2 = C^{1/V}\|F\|_2\cdot n^{-1/V}$. Let $L = C^{1/V}\|F\|_2$. Then $N(\mathcal{F}, L n^{-1/V}, \|\cdot\|_2) \le n$ (i.e., the $L n^{-1/V}$-net of $\mathcal{F}$ contains at most $n$ elements). Construct $F_1 \subset F_2 \subset \cdots \subset F_n \subset \cdots$ such that each $F_n$ is an $L\cdot n^{-1/V}$-net and contains at most $n$ elements. We proceed to sho…
…$\log N\big(\operatorname{conv}\mathcal{F},\ \varepsilon\|F\|_2,\ \|\cdot\|_2\big) \le K\Big(\frac{1}{\varepsilon}\Big)^{\frac{2V}{V+2}}$. Inequality 18.1 will be proved in two steps:

(1) (18.2) $\log N\big(\operatorname{conv} F_n,\ C_1 L\, n^{-W},\ \|\cdot\|_2\big) \le D_1\cdot n$, by induction on $n$, using Kolmogorov's chaining technique, and

(2) for fixed $n$, (18.3) $\log N\big(\operatorname{conv} F_{n\cdot k^q},\ C_k L\, n^{-W},\ \|\cdot\|_2\big) \le D_k\cdot n$, b…
…number of elements $|G_n| \le |F_n| \le n$, where $G_n = \{f - \pi_m f : f \in F_n\}$. We will find $\frac{1}{2}C_1 L n^{-W}$-nets for both $F_m$ and $G_n$, and bound the number of their elements to finish the induction step. We need the following lemma to bound the number of elements of the $\frac{1}{2}C_1 L n^{-W}$-net of $G_n$.

Lemma 18.2. Let $(\mathcal{X}, \mathcal{A}, \mu)$ be a …
…$\frac{1}{k}(\operatorname{diam}\mathcal{F})^2$. Thus at least one realization of $\frac{1}{k}\sum_{i=1}^k Y_i$ has distance at most $k^{-1/2}\operatorname{diam}\mathcal{F}$ to $\sum_i \lambda_i f_i$. Since all realizations of $\frac{1}{k}\sum_{i=1}^k Y_i$ have the form $\frac{1}{k}\sum_i k_i f_{j_i}$, there are at most $\binom{n+k}{k}$ such forms. Thus
$$N\big(k^{-1/2}\operatorname{diam}\mathcal{F},\ \operatorname{conv}\mathcal{F},\ \|\cdot\|_2\big) \le \binom{n+k}{k} \le \frac{(k+n)^{k+n}}{k^k\, n^n}$$ …
…$\|\cdot\|_2) \le D_1 \cdot m$. In other words, the $C_1 L\, m^{-W}$-net of $\operatorname{conv} F_m$ contains at most $e^{D_1 m}$ elements. This defines a partition of $\operatorname{conv} F_m$ into at most $e^{D_1 m}$ sets. Each set is isometric to a subset of a ball of radius $C_1 L\, m^{-W}$. Thus each set can be partitioned into at most $6^{dW\lceil n/d\rceil}$ sets of diameter at most $\frac{1}{2}C_1 L\, n^{-W}$ according to …
…net, we set $2L\big(n(k-1)^q\big)^{-1/V} = L k^{-2} n^{-W}\,\epsilon$, and get $\epsilon = \frac{1}{2}\, n^{-1/2}(k-1)^{q/V} k^{-2}\cdots$

Lecture 18: Uniform entropy condition of VC-hull classes. 18.465

and
$$N\big(\operatorname{conv} G_{n,k},\ \epsilon\operatorname{diam} G_{n,k},\ \|\cdot\|_2\big) \le \Big(e + \frac{e\, n k^q\, 2^2}{\epsilon^2}\Big)^{\cdots} \le \big(e + e\, k^{-4+q+2q/V}\big)^{8 n k^4 (k-1)^{-2q/V}}\cdots$$
As a result, we get …
…Boost are all maximal-margin classifiers. Maximizing the margin means penalizing small margins, controlling the complexity of all possible outputs of the algorithm, and thus controlling the generalization error. We can define $\varphi_\delta(s)$ as in the following plot, and control the error $P(yf(x) \le 0)$ in terms of $E\varphi_\delta(yf(x))$: $P(yf($…
…$g(x))$, the definition of $d_x$, and the packing numbers for $\varphi_\delta(y\mathcal{F})$ and $\mathcal{F}$ satisfy the inequality $D\big(\varphi_\delta(y\mathcal{F}), \epsilon, d_z\big) \le D(\mathcal{F}, \epsilon\delta, d_x)$. Recall that for a VC-subgraph class $\mathcal{H}$, the packing number satisfies $D(\mathcal{H}, \epsilon, d_x) \le C\big(\frac{1}{\epsilon}\big)^V$, where $C$ and $V$ are constants. For its corresponding VC-hull class, there exists $K(C, V)$, …
…$\big)^V$. D. Haussler (1995) also proved the following two inequalities related to the packing …, and $D(\mathcal{H}, \epsilon, d_x) \le K\big(\frac{1}{\epsilon}\big)^V$ … Since $\operatorname{conv}(\mathcal{H})$ satisfies the uniform entropy condition (Lecture 16) and $f \in [-1,1]^{\mathcal{X}}$, with probability at least $1 - e^{-u}$,
$$E\varphi_\delta\big(y f(x)\big) - E_n\varphi_\delta\big(y f(x)\big) \le \frac{K}{\sqrt{n}}\int_0^{\sqrt{E\varphi_\delta}}\cdots$$
…$\sqrt{\frac{u}{n}}$ for some constant $K$. We proceed to bound $E\varphi_\delta$ for $\delta \in \{\delta_k = \exp(-k) : k \in \mathbb{N}\}$. Let $\exp(-u_k) = \big(\frac{1}{k+1}\big)^2 e^{-u}$; it follows that $u_k = u + 2\log(k+1) = u + 2\log\big(\log\frac{1}{\delta_k} + 1\big)$. Thus
$$1 - \sum_{k\in\mathbb{N}} \exp(-u_k) = 1 - \sum_{k\in\mathbb{N}}\Big(\frac{1}{k+1}\Big)^2 e^{-u} = 1 - \frac{\pi^2}{6}\cdot e^{-u} > 1 - 2 e^{-u}.$$
Thus, with probability at leas…
$$\frac{1}{n}\sum_{i=1}^n \varphi_\delta\big(y_i f(x_i)\big) \le \cdots$$
$$P\big(yf(x) \le 0\big) \le K\cdot\inf_\delta\left(P_n\big(yf(x) \le \delta\big) + n^{-\frac{V+2}{2(V+1)}}\,\delta^{-\frac{V}{V+1}} + \sqrt{\frac{2\log\big(\log\frac{1}{\delta} + 1\big)}{n}} + \sqrt{\frac{u}{n}}\right).$$

Lecture 20: Bounds on the generalization error of voting classifiers. 18.465

As in the previous lecture, let $\mathcal{H} = \{h : \mathcal{X} \to [-1,1]\}$ be a VC-subgra…
…step-up and step-down functions on the interval $[0,1]$, parametrized by $a$ and $b$. Then $VC(\mathcal{H}) = 2$. Let $\mathcal{F} = \operatorname{conv}\mathcal{H}$. First, rescale the functions: $f = \sum_{i=1}^T \lambda_i h_i = 2\sum_{i=1}^T \lambda_i h'_i - 1 = 2f' - 1$, where $f' = \sum_{i=1}^T \lambda_i h'_i$ and $h' = \frac{h+1}{2}$. We can generate any non-decreasing function $f'$ such that $f'(0) = 0$ and $f'(1) = 1$. Similarly, we c…
$$\le \frac{1}{n}\sum_{i=1}^n \varphi_\delta\big(y_i f(x_i)\big) + \left(E\varphi_\delta\big(yf(x)\big) - \frac{1}{n}\sum_{i=1}^n \varphi_\delta\big(y_i f(x_i)\big)\right) \le \frac{1}{n}\sum_{i=1}^n I\big(y_i f(x_i) \le \delta\big) + \left(E\varphi_\delta\big(yf(x)\big) - \frac{1}{n}\sum_{i=1}^n \varphi_\delta\big(y_i f(x_i)\big)\right).$$
By going from $\frac{1}{n}\sum_{i=1}^n I(y_i f(x_i) \le 0)$ to $\frac{1}{n}\sum_{i=1}^n I(y_i f(x_i) \le \delta)$, we are penalizing small-confidence predictions. The margin $yf(x)$ is a measur…
…$y_n)$. Since $|\varphi_\delta(s) - \varphi_\delta(t)| \le \frac{1}{\delta}|s - t|$,

[Figure: the clipped margin function $\varphi_\delta(s)$.]

we have
$$d_{x,y}\big(\varphi_\delta(yf(x)),\ \varphi_\delta(yg(x))\big) = \left(\frac{1}{n}\sum_{i=1}^n\big(\varphi_\delta(y_i f(x_i)) - \varphi_\delta(y_i g(x_i))\big)^2\right)^{1/2} \le \left(\frac{1}{\delta^2}\,\frac{1}{n}\sum_{i=1}^n\big(y_i f(x_i) - y_i g(x_i)\big)^2\right)^{1/2} = \frac{1}{\delta}\Big(\frac{1}{n}\sum_{i=1}^n \cdots\Big)^{1/2}$$ …
…$\varphi_\delta(yf_1(x)), \ldots, \varphi_\delta(yf_D(x))$ is an $\varepsilon$-cover of $\varphi_\delta(y\mathcal{F}_d(x))$: $d_{x,y}\big(\varphi_\delta(yf(x)), \varphi_\delta(yf_i(x))\big) \le \varepsilon$. $\square$

Lecture 21: Bounds on the generalization error of voting classifiers. 18.465

We continue to prove the lemma from Lecture 20:

Lemma 21.1. Let $\mathcal{F}_d = \operatorname{conv}_d \mathcal{H} = \{\sum_{i=1}^d \lambda_i h_i,\ h_i \in \mathcal{H}\}$ and fix $\delta \in (0,1]$. Then $E\varphi_\delta - \bar\varphi_\delta$ …
…$\int \log^{1/2}\frac{1}{x}\, dx$ … $\le K\sqrt{\frac{dV}{n}}\cdot\frac{2\sqrt{E\varphi_\delta}}{\delta}\,\log^{1/2}\frac{2}{\delta\sqrt{E\varphi_\delta}}$, where we have made the change of variables $\frac{\varepsilon\delta}{2} = x$, $\varepsilon = \frac{2x}{\delta}$. Without loss of generality, assume $E\varphi_\delta \ge 1/n$; otherwise we are doing better than in the Lemma: $\sqrt{E} \le \frac{\cdots}{\sqrt{n}} \Rightarrow E \le \frac{\log n}{n}$(…). Then
$$K\int_0^{\sqrt{E\varphi_\delta}}\log^{1/2} D\big(\varphi_\delta(y\mathcal{F}_d(x)),\ \varepsilon\big)\, d\varepsilon \le K\cdots$$
Take $t_{\delta,d}$ with $e^{-t_{\delta,d}} = \frac{6\delta}{\pi^2 d^2}\, e^{-t}$. Then
$$P\Big(\forall f \in \mathcal{F}_d,\ \ldots + \sqrt{\tfrac{t_{\delta,d}}{n}}\Big) \ge 1 - e^{-t_{\delta,d}} = 1 - \frac{6\delta}{\pi^2 d^2}\, e^{-t}$$
and
$$P\Big(\forall \delta, d\ \ \forall f \in \mathcal{F}_d,\ \ldots + \sqrt{\tfrac{t_{\delta,d}}{n}}\Big) \ge 1 - \sum_{\delta,d}\frac{6\delta}{\pi^2 d^2}\, e^{-t} = 1 - e^{-t}.$$
Since $t_{\delta,d} = t + \log\frac{\pi^2 d^2}{6\delta}$, for all $f \in \mathcal{F}_d$, $E\varphi$…
$$\sqrt{E\varphi_\delta} - \frac{\frac{1}{n}\sum_{i=1}^n \varphi_\delta}{\sqrt{E\varphi_\delta}} \le \varepsilon \ \Longrightarrow\ E\varphi_\delta - \varepsilon\sqrt{E\varphi_\delta} - \frac{1}{n}\sum_{i=1}^n \varphi_\delta \le 0,$$
so
$$\sqrt{E\varphi_\delta} \le \frac{\varepsilon}{2} + \sqrt{\Big(\frac{\varepsilon}{2}\Big)^2 + \frac{1}{n}\sum_{i=1}^n \varphi_\delta},\qquad E\varphi_\delta \le 4\Big(\frac{\varepsilon}{2}\Big)^2 + \frac{2}{n}\sum_{i=1}^n \varphi_\delta.$$
The bound becomes
$$P\big(yf(x) \le 0\big) \le K\left(\frac{1}{n}\sum_{i=1}^n I\big(y_i f(x_i) \le \delta\big) + \frac{dV}{n}\log\frac{n}{\delta} + \frac{t}{n}\right)\qquad (*)$$
where $K$ is a …
…we used the notation $P_n(C) = \frac{1}{n}\sum_{i=1}^n I(x_i \in C)$.

Remark:
$$P\big(yf(x) \le 0\big) \le \inf_{\delta\in(0,1)} K\Bigg(\underbrace{P_n\big(yf(x) \le \delta\big)}_{\text{inc. with }\delta} + \underbrace{\sqrt{\frac{V\min\big(T,\ (\log n)/\delta^2\big)\log n}{n}}}_{\text{dec. with }\delta} + \sqrt{\frac{t_\delta}{n}}\Bigg).$$

Proof. Let $f = \sum_{i=1}^T \lambda_i h_i$ and $g = \frac{1}{k}\sum_{j=1}^k Y_j$, where $P(Y_j = h_i) = \lambda_i$ and $P(Y_j = 0) =$ …
$$\le E_x P_Y\left(\frac{1}{k}\sum_{j=1}^k Y_j \ge EY_1 + \frac{\delta}{2}\right) \le \text{(by Hoeffding's inequality)}\quad E_x\, e^{-k D\left(EY_1 + \frac{\delta}{2},\, EY_1\right)},$$
because $D(p, q) \ge 2(p-q)^2$ (KL divergence for binomial variables, Homework 1) and, hence,
$$\le E_x\, e^{-k\delta^2/2} = e^{-k\delta^2/2},$$
where $D\big(EY_1 + \frac{\delta}{2},\, EY_1\big)$ …
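The Pinsker-type inequality $D(p,q) \ge 2(p-q)^2$ for Bernoulli KL divergence, used in the Hoeffding step above, can be verified on a grid (a numerical sketch of my own, not the Homework 1 proof):

```python
import math

def kl_bernoulli(p, q):
    """KL divergence D(p, q) between Bernoulli(p) and Bernoulli(q), 0 < p, q < 1."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

grid = [i / 100 for i in range(1, 100)]
# D(p, q) >= 2 (p - q)^2 on all interior pairs
assert all(kl_bernoulli(p, q) >= 2 * (p - q) ** 2 - 1e-12
           for p in grid for q in grid)
```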
…$1 - e^{-t}$, for all $k, \delta$ and any $g \in \mathcal{F}_k = \operatorname{conv}_k(\mathcal{H})$,
$$\Phi\Big(E\varphi_\delta,\ \frac{1}{n}\sum_{i=1}^n \varphi_\delta\Big) = \frac{E\varphi_\delta\big(yg(x)\big) - \frac{1}{n}\sum_{i=1}^n \varphi_\delta\big(y_i g(x_i)\big)}{\sqrt{E\varphi_\delta\big(yg(x)\big)}} \le K\left(\sqrt{\frac{t}{n}} + \sqrt{\frac{V k \log\frac{n}{\delta}}{n}}\right) = \varepsilon/2.$$
Note that $\Phi(x, y) = \frac{x - y}{\sqrt{x}}$ is increasing with $x$ and decreasing with $y$. By inequalities (22.1) and (22.2), and $E\varphi_\delta\big(yg(x)\big) \ge P\big(y$…
$$\le \frac{\varepsilon}{2} + \sqrt{\Big(\frac{\varepsilon}{2}\Big)^2 + y}\ \cdots$$
So,
$$P\big(yf(x) \le 0\big) \le \left(\frac{\varepsilon}{2} + \sqrt{\Big(\frac{\varepsilon}{2}\Big)^2 + P_n\big(yf(x) \le 3\delta\big) + \frac{1}{n}}\right)^2.$$

Lecture 23: Bounds in terms of sparsity. 18.465

Let $f = \sum_{i=1}^T \lambda_i h_i$, where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_T \ge 0$. Rewrite $f$ as
$$f = \sum_{i=1}^T \lambda_i h_i = \sum_{i=1}^d \lambda_i h_i + \sum_{i=d+1}^T \lambda_i h_i = \sum_{i=1}^d \lambda_i h_i + \gamma(d)\sum_{i=d+1}^T \lambda'_i h_i,$$
wher…
$$P_Y\big(yf(x) \le 0,\ yg(x) \ge \delta \mid (x,y)\big) \le P_Y\big(yg(x) - yf(x) > \delta \mid (x,y)\big) = P_Y\left(\gamma(d)\, y\Big(\frac{1}{k}\sum_{j=1}^k Y_j(x) - EY_1\Big) \ge \delta \ \Big|\ (x,y)\right).$$
By renaming $Y'_j = \frac{y Y_j + 1}{2} \in [0,1]$ and applying Hoeffding's inequality, we get
$$P_Y\left(\gamma(d)\, y\Big(\frac{1}{k}\sum_{j=1}^k Y_j(x) - EY_1\Big) \ge \delta \ \Big|\ (x,y)\right) = P_Y\Big(\frac{1}{k}\sum_{j=1}^k \cdots\Big)$$
Hence, …
$$e(f, \delta) = \min_{0 \le d \le T}\left(d + \frac{2\gamma^2(d)}{\delta^2}\log n\right).$$
Recall from the previous lectures that
$$P_n\big(yg(x) \le 2\delta\big) \le P_n\big(yf(x) \le 3\delta\big) + \frac{1}{n}.$$
Hence, we have the following margin-sparsity bound.

Theorem 23.1. For $\lambda_1 \ge \cdots \ge \lambda_T \ge 0$, define $\gamma(d, f) = \sum_{i=d+1}^T \lambda_i$. Then with probability at least $1 - e^{-t}$, $P\big(yf(x) \le 0\big) \le$ …
$$\gamma(d) \le \sum_{i=d+1}^\infty K i^{-\alpha} \approx K\int_d^\infty x^{-\alpha}\, dx = \frac{K}{(\alpha-1)\, d^{\alpha-1}} = \frac{K_\alpha}{d^{\alpha-1}}.$$
Then
$$e(f, \delta) = \min_d\left(d + \frac{2\gamma^2(d)\log n}{\delta^2}\right) \le \min_d\left(d + \frac{K_\alpha \log n}{\delta^2 d^{2(\alpha-1)}}\right).$$
Taking the derivative with respect to $d$ and setting it to zero, we get $1 - \frac{K_\alpha \log n}{\delta^2 d^{2\alpha-1}} = 0$, so
$$d = \frac{K_\alpha \log^{1/(2\alpha-1)} n}{\delta^{2/(2\alpha-1)}} \le \frac{K \log n}{\delta^{2/(2\alpha-1)}}.$$
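The trade-off just optimized, $d \mapsto d + K_\alpha \log n/(\delta^2 d^{2(\alpha-1)})$, can be minimized numerically and compared against the analytic scaling $d^* \sim (\log n/\delta^2)^{1/(2\alpha-1)}$. A sketch with parameter values of my own choosing (the constant $K$ is set to 1 for illustration):

```python
import math

def objective(d, alpha, n, delta, K=1.0):
    """d + K * log(n) / (delta^2 * d^(2*(alpha-1))) from the sparsity bound."""
    return d + K * math.log(n) / (delta ** 2 * d ** (2 * (alpha - 1)))

alpha, n, delta = 2.0, 10 ** 6, 0.1
best = min(range(1, 10000), key=lambda d: objective(d, alpha, n, delta))
# analytic minimizer up to constants: d* ~ (log n / delta^2)^(1 / (2*alpha - 1))
d_star = (math.log(n) / delta ** 2) ** (1.0 / (2 * alpha - 1))
assert 0.5 * d_star <= best <= 2.0 * d_star
assert objective(best, alpha, n, delta) <= objective(1, alpha, n, delta)
```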
…in the convex hull $\operatorname{conv}\mathcal{H}$ takes the form
$$f = \sum_{i=1}^T \lambda_i h_i = \sum_{c\in\{C_1,\cdots,C_m\}} \alpha_c \sum_{h\in c} \lambda^c_h\, h,$$
where $T \gg m$ and … for $h \in c$. We can approximate $f$ by $g$ as follows. For each cluster $c$, let $\{Y_k\}$ be such that for all $h \in c \subset \mathcal{H}$ we have $P(Y_k = h) = \lambda^c_h$(…), and let $g = \sum_{k=1}^{N} Z_k$, with $EZ_k = \cdots$ and $\sigma_c^2 = \operatorname{var}(Z_k)$, so that $Eg = f$. We d…
…ifier takes the form $y = \operatorname{sign}(f(x))$ and a classification error corresponds to $yf(x) < 0$. We can bound the error by
$$\text{(24.1)}\qquad P\big(yf(x) < 0\big) \le P\big(yg \le \delta\big) + P\big(\sigma_c^2 > r\big) + P\big(yg > \delta \mid yf(x) \le 0,\ \sigma_c^2 < r\big).$$
The third term on the right side of inequality (24.1) can be bounded in the following way:
$$P\big(yg > \delta \mid yf(x) \le 0,\ \sigma^2 < r\big)$$ …
$$P\big(yg \le \delta\big) \le E_{Y_1,\cdots,Y_N}\, E\varphi_\delta(yg)$$ …

[Figure: clusters $C_1, C_2, \ldots, C_m$ of weak classifiers $h$.]

Lecture 24: Bounds in terms of sparsity (example). 18.465

Any realization of $g = \sum_{k=1}^{N_m} Z_k$, where $N_m$ depends on the number of clusters $(C_1, \cdots, C_m)$, is a linear combination of $h \in \mathcal{H}$, and $g \in \operatorname{conv}_{N_m}\mathcal{H}$. According to Lemma 20.2, $\big(E\varphi_\delta(yg) - E_n\varphi_\delta(y$…
…(24.1), we approximate $\sigma_c^2$ by
$$\sigma_N^2 = \frac{1}{N}\sum_{k=1}^N \frac{1}{2}\Big(Z_k^{(1)} - Z_k^{(2)}\Big)^2,$$
where $Z_k^{(1)}$ and $Z_k^{(2)}$ are independent copies of $Z_k$. We have $E_{Y^{(1,2)}_{1,\cdots,N}}\, \sigma_N^2 = \sigma_c^2$ and
$$\operatorname{var}_{Y^{(1,2)}_{1,\cdots,N}} \sigma_N^2 = \frac{1}{N}\operatorname{var}\Big(\frac{1}{2}\big(Z_k^{(1)} - Z_k^{(2)}\big)^2\Big) \le \frac{1}{4N}\, E\big(Z_k^{(1)} - Z_k^{(2)}\big)^4 \le$$ …
$$P_{Y^{(1,2)}_{1,\cdots,N}}\Big(\sigma_N^2 \ge 3r\Big) + \cdots + \frac{1}{n},$$
with an appropriate choice of $N$, following the same line of reasoning as in inequality (24.1). We note that $P_{Y_1,\cdots,Y_N}\big(\sigma_N^2 \ge 3r\big) \le E_{Y_1,\cdots,Y_N}\, \varphi_r\big(\sigma_N^2\big)$ for some $\varphi_r$, and $E_n\varphi_r(\sigma_N^2) \le P_n\big(\sigma_N^2 \ge 2r\big)$(…). Since
$$\sigma_N^2 \in \Big\{\frac{1}{2N}\sum_{k=1}^N \sum_c \alpha_c\big(h^{(1)}_{k,c} - h^{(2)}_{k,c}\big)^2$$ …
$$\cdots + \frac{1}{n} + \frac{V \cdot N_m \log\frac{n}{\delta}}{n} + \sqrt{\frac{u}{n}}.$$
As a result, with probability at least $1 - Ke^{-u}$, we have, for all $f \in \operatorname{conv}\mathcal{H}$,
$$P\big(yf(x) \le 0\big) \le K\cdot\inf_{r,\delta,m}\left(P_n\big(yg \le 2\delta\big) + P_n\big(\sigma^2 \ge r\big) + \frac{V\cdot\min\big(rm/\delta^2,\ N_m\big)}{n}\log\frac{n}{\delta}\log n + \sqrt{\frac{u}{n}}\right).$$

[Figure: $\varphi_\delta(s)$ with breakpoints at $\delta, 2\delta, 3\delta, 4\delta$.]

Lecture 25: Martingale-difference inequali…
…$x'_n)$:
$$Z - EZ = Z_1 + Z_2 + \cdots + Z_n,$$
where
$$Z_i = E_{x'} Z\big(x'_1, \ldots, x'_{i-1}, x_i, \ldots, x_n\big) - E_{x'} Z\big(x'_1, \ldots, x'_i, x_{i+1}, \ldots, x_n\big),$$
and assume

(1) $|Z_i| \le c_i$;
(2) $E_{x_i} Z_i = 0$;
(3) $Z_i = Z_i(x_i, \ldots, x_n)$.

Lemma 25.1. For any $\lambda \in \mathbb{R}$, $E e^{\lambda Z_i} \le e^{\lambda^2 c_i^2/2}$(…).

Proof. Take any $-1 \le s \le 1$. As a function of $s$, $e^{\lambda s}$ is convex and …
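The convexity step behind Lemma 25.1 (a Hoeffding-lemma argument) can be checked numerically: for $s \in [-1,1]$, $e^{\lambda s}$ lies below the chord $\frac{1+s}{2}e^{\lambda} + \frac{1-s}{2}e^{-\lambda}$; taking expectation of a mean-zero $s$ leaves $\cosh\lambda$, and $\cosh\lambda \le e^{\lambda^2/2}$ gives the subgaussian factor used in Theorem 25.1. A sketch on a grid of my own choosing:

```python
import math

checks = []
for l in [0.1, 0.5, 1.0, 2.0]:
    for i in range(-10, 11):
        s = i / 10
        # convexity: e^(l*s) <= (1+s)/2 * e^l + (1-s)/2 * e^(-l) on [-1, 1]
        chord = (1 + s) / 2 * math.exp(l) + (1 - s) / 2 * math.exp(-l)
        checks.append(math.exp(l * s) <= chord + 1e-12)
    # expectation of a mean-zero variable gives cosh(l), and cosh(l) <= e^(l^2/2)
    checks.append(math.cosh(l) <= math.exp(l ** 2 / 2))
assert all(checks)
```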
Theorem 25.1. If condition (25.1) is satisfied, then
$$P\big(Z - EZ > t\big) \le e^{-\frac{t^2}{2\sum_{i=1}^n c_i^2}}.$$
Proof. For any $\lambda > 0$,
$$P\big(Z - EZ > t\big) = P\big(e^{\lambda(Z-EZ)} > e^{\lambda t}\big) \le \frac{E e^{\lambda(Z-EZ)}}{e^{\lambda t}}.$$
Furthermore,
$$E e^{\lambda(Z-EZ)} = E e^{\lambda(Z_1 + \cdots + Z_n)} = E\, E_{x_1} e^{\lambda(Z_1 + \cdots + Z_n)} = E\Big(e^{\lambda(Z_2 + \cdots + Z_n)}\, E_{x_1} e^{\lambda Z_1}\Big)$$ …
…$\sup_{\mathcal{F}}\Big|Ef - \frac{1}{n}\sum_{i=1}^n f(x_i)\Big|$:
$$\big|Z(x_1, \ldots, x'_i, \ldots, x_n) - Z(x_1, \ldots, x_i, \ldots, x_n)\big| = \left|\sup_f\Big|Ef - \frac{1}{n}\big(f(x_1) + \cdots + f(x'_i) + \cdots + f(x_n)\big)\Big| - \sup_f\Big|Ef - \frac{1}{n}\big(f(x_1) + \cdots + f(x_i) + \cdots + f(x_n)\big)\Big|\right| \le \sup_{f\in\mathcal{F}} \cdots$$
because $\frac{1}{n}$…
…be i.i.d. such that $P(\varepsilon = \pm 1) = \frac{1}{2}$. Define
$$Z\big((\varepsilon_1, x_1), \ldots, (\varepsilon_n, x_n)\big) = \sup_{f\in\mathcal{F}}\Big|\frac{1}{n}\sum_{i=1}^n \varepsilon_i f(x_i)\Big|.$$
Then, for any $i$,
$$\big|Z\big((\varepsilon_1,x_1), \ldots, (\varepsilon'_i, x'_i), \ldots, (\varepsilon_n,x_n)\big) - Z\big((\varepsilon_1,x_1), \ldots, (\varepsilon_i, x_i), \ldots, (\varepsilon_n,x_n)\big)\big| \le \sup_{f\in\mathcal{F}}\frac{1}{n}\big|\varepsilon'_i f(x'_i) - \varepsilon_i f(x_i)\big|$$ …
…$\varepsilon_i f(x_i)$. Assume $a \le f(x) \le b$ for all $f, x$. In the last lecture we proved that $Z$ is concentrated around its expectation: with probability at least $1 - e^{-t}$,
$$Z < EZ + (b-a)\sqrt{\frac{2t}{n}}.$$
Furthermore,
$$EZ(x) = E\sup_{f\in\mathcal{F}}\Big(Ef - \frac{1}{n}\sum_{i=1}^n f(x_i)\Big) = E\sup_{f\in\mathcal{F}}\Big(E\,\frac{1}{n}\sum_i f(x'_i) - \frac{1}{n}\sum_{i=1}^n \cdots\Big)$$
$$Z(x) \le 2R(x) + 4M\sqrt{\frac{2t}{n}},\qquad P\left(Z(x) \le 2ER(x) + 2\sqrt{\frac{2t}{n}}\right) \ge 1 - e^{-t}.$$

Lecture 26: Comparison inequality for Rademacher processes. 18.465

If $0 \le f \le 1$, then
$$P\left(Z(x) \le 2ER(x) + \sqrt{\frac{2t}{n}}\right) \ge 1 - e^{-t}.$$
Consider $E_\varepsilon R(x) = E_\varepsilon \sup_{f\in\mathcal{F}} \frac{1}{n}\sum_{i=1}^n \varepsilon_i f_i$, where $f = (f_1, \ldots, f_n)$. Define contractions $\varphi_i : \mathbb{R} \to \mathbb{R}$ …
…$) = 1/2$, we need to prove
$$\frac{1}{2}\, G\Big(\sup_{t\in T}\big(t_1 + \varphi(t_2)\big)\Big) + \frac{1}{2}\, G\Big(\sup_{t\in T}\big(t_1 - \varphi(t_2)\big)\Big) \le \frac{1}{2}\, G\Big(\sup_{t\in T}\big(t_1 + t_2\big)\Big) + \frac{1}{2}\, G\Big(\sup_{t\in T}\big(t_1 - t_2\big)\Big).$$
Assume $\sup_{t\in T}\big(t_1 + \varphi(t_2)\big)$ is attained at $(t_1, t_2)$ and $\sup_{t\in T}\big(t_1 - \varphi(t_2)\big)$ is attained at $(s_1, s_2)$. Then $t_1 + \varphi(t_2) \ge s_1 + \varphi(s_2)$ and … Again, we want to show …
Case 3: $t_2 \ge 0$, $s_2 \ge 0$.

Case 3a: $s_2 \le t_2$. It is enough to prove
$$G\big(t_1 + \varphi(t_2)\big) + G\big(s_1 - \varphi(s_2)\big) \le G\big(t_1 + t_2\big) + G\big(s_1 - s_2\big).$$
Note that $s_2 - \varphi(s_2) \ge 0$, since $s_2 \ge 0$ and $\varphi$ is a contraction. Since $|\varphi(s)| \le |s|$, $s_1 - s_2 \le s_1 + \varphi(s_2) \le t$…
$$G\big(s_1 - \varphi(s_2)\big) - G\big(s_1 - s_2\big) \le G\big(t_1 + t_2\big) - G\big(t_1 + \varphi(t_2)\big).$$

Case 3b: $t_2 \le s_2$. Again, it is enough to show $\Sigma \le G(s_1 + s_2) + G(t_1 - t_2)$, i.e.
$$G\big(t_1 + \varphi(t_2)\big) - G\big(t_1 - t_2\big) \le G\big(s_1 + s_2\big) - G\big(s_1 - \varphi(s_2)\big).$$
We have $t_1 - t_2 \le t_1 - \varphi(t_2) \le s_1 - \varphi(s_2)$, since $(s_1, s_2)$ achieves the maximum, and since $t_2 + \varphi(t_2) \ge 0$ ($\varphi$ is a contraction and $t_2 \ge$…
…with $G(s) = (s)_+$.

Lemma 26.1.
$$E\sup_{t\in T}\Big|\sum_{i=1}^n \varepsilon_i \varphi_i(t_i)\Big| \le 2\, E\sup_{t\in T}\Big|\sum_{i=1}^n \varepsilon_i t_i\Big|.$$
Proof. Note that $|x| = (x)_+ + (-x)_+$. We apply the Contraction Inequality for Rademacher processes with $G(s) = (s)_+$:
$$E\sup_{t\in T}\Big|\sum_{i=1}^n \cdots$$
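Lemma 26.1 can be sanity-checked by exact enumeration over the signs for a tiny index set (the index set $T$ and the choice $\varphi = \tanh$, a contraction with $\varphi(0) = 0$, are my own illustration, not from the notes):

```python
import math
from itertools import product

T = [(0.8, -0.3), (-0.5, 0.9), (0.2, 0.4)]      # small index set, n = 2
phi = math.tanh                                  # a contraction with phi(0) = 0

def rad_sup(vectors):
    """E_eps sup_t | sum_i eps_i * t_i |, computed by exact enumeration."""
    n = len(vectors[0])
    total = 0.0
    for eps in product([-1, 1], repeat=n):
        total += max(abs(sum(e * v for e, v in zip(eps, t))) for t in vectors)
    return total / 2 ** n

lhs = rad_sup([tuple(phi(v) for v in t) for t in T])   # contracted coordinates
rhs = rad_sup(T)
assert lhs <= 2 * rhs + 1e-12   # comparison inequality with the factor 2
```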
…$[0,1]$-valued functions, $Z = \sup_f\big(Ef - \frac{1}{n}\sum_{i=1}^n f(x_i)\big)$ and $R = \sup_f \frac{1}{n}\sum_{i=1}^n \epsilon_i f(x_i)$, where $\epsilon_1, \cdots, \epsilon_n$ are Rademacher random variables. For any $f \in \mathcal{F}$, unknown and to be estimated, the empirical error $Z$ can be probabilistically bounded by $R$ in the following way. Using the fact that $Z \le 2R$ and the martingale inequality, $P\big(Z \le EZ +$ …
$$P\big(yf(x) < 0\big) \le E\varphi_\delta\big(yf(x)\big) \le \underbrace{\frac{1}{n}\sum_{i=1}^n \varphi_\delta\big(y_i f(x_i)\big)}_{E_n\varphi_\delta} + \underbrace{\sup_{f\in\mathcal{F}}\Big(E\varphi_\delta\big(yf(x)\big) - \frac{1}{n}\sum_{i=1}^n \varphi_\delta\big(y_i f(x_i)\big)\Big)}_{Z}$$
$$\le\ \text{(with probability } 1 - e^{-u}\text{)}\quad E_n\varphi_\delta\big(yf(x)\big) + 2\cdot\underbrace{E\sup_{f\in\mathcal{F}}\Big(\frac{1}{n}\sum_{i=1}^n \epsilon_i\varphi_\delta\big(y_i f(x_i)\big)\Big)}_{R} + \sqrt{\frac{2u}{n}}$$
$$\le\ \text{(contraction)}\quad E_n\varphi_\delta\big(yf(x)\big) + \frac{2}{\delta}\, E\sup_{f\in\mathcal{F}}\frac{1}{n}\sum_{i=1}^n \epsilon_i y_i f(x_i) + \cdots$$
…orov's chaining method (Lecture 14),
$$P\left(\sup_h \frac{1}{n}\sum_{i=1}^n \epsilon_i h(x_i) \le \frac{K}{\sqrt n}\int_0^1 \log^{1/2} D(\mathcal{H}, \epsilon, d_x)\, d\epsilon + \sqrt{\frac{u}{n}}\right) = P\left(\sup_h \frac{1}{n}\sum_{i=1}^n \epsilon_i h(x_i) \le \frac{K}{\sqrt n}\int_0^1 \sqrt{V\log\frac{1}{\epsilon}}\, d\epsilon + \sqrt{\frac{u}{n}}\right) \ge 1 - e^{-u}.$$

Lecture 27: Application of Martingale inequalities. Generalized Martingale inequaliti…
$$\cdots = \underbrace{\big(E_{x_1,\cdots,x_{n-1}}(Z \mid x_n) - E_{x_1,\cdots,x_n}(Z)\big)}_{d_n(x_n)} + \cdots + \underbrace{\big(E_{x_1}(Z \mid x_2, \cdots, x_n) - E_{x_1,x_2}(Z \mid x_3, \cdots, x_n)\big)}_{d_2(x_2,\cdots,x_n)} + \cdots,$$
with the assumptions that $E_{x_i} d_i = 0$ and $\|d_i\|_\infty \le c_i$. We will give a generalized martingale inequality below. Write $\sum_{i=1}^n d_i = Z - EZ$, where $d_i = d_i(x_i, \cdots, x_n)$, $\max_i \|d_i\|_\infty \le C$, $\sigma_i^2 =$ …
…Choose $\lambda$ such that $\frac{\lambda^2}{2(1 - \lambda C)} \le \lambda\epsilon$; we get $\lambda \le \frac{2\epsilon}{1 + 2\epsilon C}$, and $E_{d_n}\exp(\lambda d_n)\exp(-\lambda\epsilon\sigma_n^2) \le 1$. Iterating over $i = n, \cdots, 1$, we get
$$P\left(\sum_{i=1}^n d_i - \epsilon\sum_{i=1}^n \sigma_i^2 \ge t\right) \le \exp(-\lambda\cdot t).$$
T…
…$\sigma(0) = 0$ and $|\sigma(s) - \sigma(t)| \le L|s - t|$, $-1 \le \sigma \le 1$. Example:
$$\sigma(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}.$$
Assume we have data $(x_1, y_1), \ldots, (x_n, y_n)$, $-1 \le y_i \le 1$. We can minimize
$$\frac{1}{n}\sum_{i=1}^n \big(y_i - h(x_i)\big)^2$$
over $\mathcal{H}_k$, where $k$ is the number of layers. Define $L(y, h(x)) = (y - h(x))^2$, $0 \le L(y, h(x)) \le 4$. We want to bound $EL(y, h($…
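The example activation $\sigma(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$ is $\tanh$, which satisfies all three stated properties with $L = 1$. A quick grid check (my own sketch):

```python
import math

def sigma(x):
    """The tanh activation written out as in the notes."""
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

pts = [i / 10 for i in range(-50, 51)]
assert sigma(0.0) == 0.0                               # sigma(0) = 0
assert all(-1 <= sigma(x) <= 1 for x in pts)           # -1 <= sigma <= 1
# Lipschitz with L = 1: |sigma(s) - sigma(t)| <= |s - t|
assert all(abs(sigma(s) - sigma(t)) <= abs(s - t) + 1e-12
           for s in pts for t in pts)
```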
8 · (2L Aj ) � � � � � k � j=1 · E sup h∈H � � 1 � � � n n � εih(xi) i=1 � � � + � � 8 .√ n Proof. Since −2 ≤ y − h(x) ≤ 2, (y−h(x))2 4 : [−2, 2] �→ R is a contraction because largest derivative of s on 2 [−2, 2] is 4. Hence, E � � 1 � sup � � n h∈Hk (A1,...,Ak ) n � i=1 � � εi(yi − h(xi))2 � � �...
sup_{h∈H_k(A_1,…,A_k)} (1/n) Σ_{i=1}^n ε_i h(x_i) …

Lecture 28 Generalization bounds for neural networks. 18.465

Furthermore,

E | (1/n) Σ_{i=1}^n ε_i y_i | ≤ ( E ( (1/n) Σ_{i=1}^n ε_i y_i )² )^{1/2} = ( Σ_{i=1}^n (1/n²) E ε_i² y_i² )^{1/2} = ( E y_1² / n )^{1/2} ≤ 1/√n.

Using the fact that σ/L is a contraction, …
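The step E|(1/n) Σ ε_i y_i| ≤ 1/√n can be checked by Monte Carlo simulation (seed and sample sizes are arbitrary choices for illustration).

```python
import numpy as np

# Monte Carlo check of E|1/n sum eps_i y_i| <= (E y_1^2 / n)^{1/2} <= 1/sqrt(n).
rng = np.random.default_rng(1)
n, trials = 100, 5000
y = rng.uniform(-1.0, 1.0, size=(trials, n))       # any labels with |y_i| <= 1
eps = rng.choice([-1.0, 1.0], size=(trials, n))    # Rademacher signs
lhs = float(np.mean(np.abs((eps * y).mean(axis=1))))
print(lhs, 1 / np.sqrt(n))
```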
Σ_j α_j (1/n) Σ_{i=1}^n ε_i h_j(x_i) = ( Σ_j |α_j| ) Σ_j α_j′ (1/n) Σ_{i=1}^n ε_i h_j(x_i),

where α_j′ = α_j / Σ_j |α_j|. Since Σ_j |α_j| ≤ A_k for the layer k,

2L E_ε sup_{h∈H_k(A_1,…,A_k)} | Σ_j α_j (1/n) Σ_{i=1}^n ε_i h_j(x_i) | ≤ 2L A_k E_ε sup | Σ_j α_j′ (1/n) Σ_{i=1}^n ε_i h_j(x_i) | ≤ … sup_{h∈H_k(A_1,…,A_k)} | (1/n) …
… Σ_{i=1}^n ε_i h(x_i) | + 8/√n.

Lecture 29 Generalization bounds for neural networks. 18.465

In Lecture 28 we proved

E sup_{h∈H_k(A_1,…,A_k)} | (1/n) Σ_{i=1}^n ε_i (y_i − h(x_i))² | ≤ 8 Π_{j=1}^k (2L A_j) · E sup_{h∈H} | (1/n) Σ_{i=1}^n ε_i h(x_i) | + 8/√n.

Hence,

EL…
, ε, d_x) dε + K √(t/n), where

d_x(f, g) = ( (1/n) Σ_{i=1}^n (f(x_i) − g(x_i))² )^{1/2}.

Furthermore,

P_ε( ∀ h ∈ H, | (1/n) Σ_{i=1}^n ε_i h(x_i) | ≤ (K/√n) ∫_0^{ √( (1/n) Σ_{i=1}^n h²(x_i) ) } log^{1/2} D(H, ε, d_x) dε + K √(t/n) ) ≥ 1 − …

Since −1 ≤ h ≤ 1 for all h ∈ H, Σ_{i=1}^n h²(x_i) …
dε; ∫_0^1 √( V log(2/ε) ) dε ≤ K √V.

Let ξ ≥ 0 be a random variable. Then

Eξ = ∫_0^∞ P(ξ ≥ t) dt = ∫_0^a P(ξ ≥ t) dt + ∫_a^∞ P(ξ ≥ t) dt ≤ a + ∫_a^∞ P(ξ ≥ t) dt = a + ∫_0^∞ P(ξ ≥ a + u) du.

Let K √(V/n) = a and K √(t/n) = u. Then e^{−t} = e^{−n u² / K²}. Hence, we have

E_ε sup …
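The tail-integration identity Eξ = ∫_0^∞ P(ξ ≥ t) dt and the splitting at a can be verified numerically for a simple choice of ξ; here ξ ~ Exponential(1), an arbitrary nonnegative example.

```python
import numpy as np

# Numeric check of E xi = int_0^inf P(xi >= t) dt for xi ~ Exponential(1),
# using a trapezoid rule written out by hand.
t = np.linspace(0.0, 50.0, 200001)
tail = np.exp(-t)                       # P(xi >= t) = e^{-t}
dt = np.diff(t)
integral = float(np.sum(0.5 * (tail[:-1] + tail[1:]) * dt))  # ~ E xi = 1
# The splitting used above: E xi <= a + int_a^inf P(xi >= t) dt
a = 2.0
mask = t >= a
tb, tailb = t[mask], tail[mask]
bound = a + float(np.sum(0.5 * (tailb[:-1] + tailb[1:]) * np.diff(tb)))
print(integral, bound)
```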
Let

H_k(ℓ_1, …, ℓ_k) = ∪ { H_k(A_1, …, A_k) : A_j ∈ (2^{−ℓ_j−1}, 2^{−ℓ_j}] }.

Then the empirical process

Z( H_k(ℓ_1, …, ℓ_k) ) ≤ K Π_{j=1}^k (2L · 2^{−ℓ_j}) · ( 8 √(V/n) + √(t/n) ) + 8/√n

with probability at least 1 − e^{−t}. For a given sequence (ℓ_1, …, ℓ_k), redefine t as t + 2 Σ_{j=1}^k log|ℓ_j| (omitting terms with ℓ_j = 0), where…
e^{−t} ) ≥ 1 − (1 + 2 π²/6)^k e^{−t} ≥ 1 − 5^k e^{−t} = 1 − e^{−u} for t = u + k log 5. Hence, with probability at least 1 − e^{−u}, ∀(ℓ_1, …, ℓ_k),

Z( H_k(ℓ_1, …, ℓ_k) ) ≤ K Π_{j=1}^k (2L · 2^{−ℓ_j}) · ( 8 √(V/n) + 8/√n + √( (2 Σ_{j=1}^k log|ℓ_j| + k log 5 + u) / n ) ).

If A_j ∈ (2^{−ℓ_j−1}, 2^{−ℓ_j}], then −ℓ_j − 1 ≤ log₂ A_j ≤ −ℓ_j and…
i.i.d. and y_1, …, y_n ∈ {±1} for classification, y_i ∈ [−1, 1] for regression. Assume we have a kernel

K(x, y) = Σ_{i=1}^∞ λ_i φ_i(x) φ_i(y), λ_i > 0.

Consider a map

x ∈ X ↦ φ(x) = ( √λ_1 φ_1(x), …, √λ_k φ_k(x), … ) = ( √λ_k φ_k(x) )_{k≥1} ∈ H,

where H is a Hilbert space. Consider the scalar product in H: (u, v)_H = … For x, …
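For a finite-dimensional kernel the identity K(x, y) = (φ(x), φ(y))_H can be checked directly. A minimal sketch with the homogeneous quadratic kernel on R², an illustrative choice rather than the lecture's kernel:

```python
import numpy as np

# Check K(x, y) = <phi(x), phi(y)> for the quadratic kernel K(x,y) = (x.y)^2
# on R^2, whose explicit feature map is 3-dimensional.
def K(x, y):
    return float(np.dot(x, y)) ** 2

def phi(x):
    # explicit feature map for (x.y)^2 on R^2
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(K(x, y), phi(x) @ phi(y))
```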
outputs a classifier from F (or F_H), f(x) = (w, φ(x))_H. Then, as in Lecture 20,

P(yf(x) ≤ 0) ≤ E φ_δ(yf(x)) = (1/n) Σ_{i=1}^n φ_δ(y_i f(x_i)) + [ E φ_δ(yf(x)) − (1/n) Σ_{i=1}^n φ_δ(y_i f(x_i)) ]
≤ (1/n) Σ_{i=1}^n I(y_i f(x_i) ≤ δ) + sup_{f∈F} [ E φ_δ(yf(x)) − (1/n) Σ_{i=1}^n φ_δ(y_i f(x_i)) ].

By McDiarmid’s inequality, w…
= ⋯ = (4/(δn)) √( Σ_{i=1}^n E K(x_i, x_i) ) = (4/δ) √( E K(x_1, x_1) / n ).

Putting everything together, with probability at least 1 − e^{−t},

P(yf(x) ≤ 0) ≤ (1/n) Σ_{i=1}^n I(y_i f(x_i) ≤ δ) + (4/δ) √( E K(x_1, x_1) / n ) + √(2t/n).

Before the contraction step, we could have used the martingale method again to have E_ε only. The…
N(F, ε, d_x) ≤ N(F, ε, d). The loss class is defined as L(y, F) = { (y − f(x))², f ∈ F }. Suppose |y − f(x)| ≤ M. Then

|(y − f(x))² − (y − g(x))²| ≤ 2M |f(x) − g(x)| ≤ ε.

So,

N(L(y, F), ε, d_x) ≤ N(F, ε/(2M), d_x),

and

log N(L(y, F), ε, d_x) ≤ C_h (2M/ε)^α, α = 2d/h, d < 2 (see Homework 2, …
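The Lipschitz step |(y − f)² − (y − g)²| ≤ 2M|f − g| behind this covering-number comparison can be spot-checked numerically (seeded random points, illustration only):

```python
import numpy as np

# Spot check of |(y - f)^2 - (y - g)^2| <= 2M |f - g| when |y-f|, |y-g| <= M.
rng = np.random.default_rng(2)
M = 1.0
y = rng.uniform(-0.5, 0.5, 1000)
f = y + rng.uniform(-M, M, 1000)   # guarantees |y - f| <= M
g = y + rng.uniform(-M, M, 1000)   # guarantees |y - g| <= M
lhs = np.abs((y - f) ** 2 - (y - g) ** 2)
rhs = 2 * M * np.abs(f - g)
print(bool(np.all(lhs <= rhs + 1e-12)))
```

The inequality holds because |(y−f)² − (y−g)²| = |f − g| · |2y − f − g| ≤ |f − g| (|y − f| + |y − g|) ≤ 2M|f − g|.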
Optimistic VC inequality for random classes of sets. 18.465

Theorem 31.2.

P( sup_{C∈C(x_1,…,x_n)} [ P(C) − (1/n) Σ_{i=1}^n I(x_i ∈ C) ] / √P(C) ≥ t ) ≤ 4 G(2n) e^{−nt²/4}.

Consider the event

A_x = { x = (x_1, …, x_n) : sup_{C∈C(x_1,…,x_n)} [ P(C) − (1/n) Σ_{i=1}^n I(x_i ∈ C) ] / √P(C) ≥ t }.

So, there exists C_x ∈ C(x_1, …, x_n) such that P(C) − (1/n) Σ I(x…
1, …, x_n′) e^{−nt²/4} · 4 = 4 G(2n) e^{−nt²/4}.

Lecture 32 Applications of random VC inequality to voting algorithms and SVM. 18.465

Recall that the solution of SVM is f(x) = Σ_{i=1}^n α_i K(x_i, x), where (x_1, y_1), …. The label is predicted by sign(f(x)), and P(yf(x) ≤ 0) is the misclassification error.
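A minimal sketch of this prediction rule, with a toy RBF kernel and hand-picked coefficients (all placeholders, not an actual trained SVM):

```python
import numpy as np

# Toy sketch of the SVM prediction rule f(x) = sum_i alpha_i K(x_i, x),
# label = sign(f(x)). Kernel, points, and alphas are placeholders, not a fit.
def K(u, v):
    return float(np.exp(-np.sum((u - v) ** 2)))   # RBF kernel (arbitrary)

X = np.array([[0.0, 0.0], [1.0, 1.0]])            # training points x_i
alpha = np.array([1.0, -1.0])                     # signed coefficients

def f(x):
    return sum(a * K(xi, x) for a, xi in zip(alpha, X))

lab1 = np.sign(f(np.array([0.1, 0.0])))  # near the first point
lab2 = np.sign(f(np.array([0.9, 1.0])))  # near the second point
print(lab1, lab2)
```

Points near the first training example get label +1, points near the second get −1, as the signs of the α_i dictate.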
) + P(yf(x) ≤ 0, yg(x) > δ)
≤ P(yg(x) ≤ δ) + E_{x,y} P_Y( y (1/k) Σ_{j=1}^k Y_j(x) > δ, y E_Y Y_1(x) ≤ 0 )
≤ P(yg(x) ≤ δ) + E_{x,y} P_Y( (1/k) Σ_{j=1}^k ( y Y_j(x) − E(y Y_j(x)) ) ≥ δ )
≤ (by Hoeffding) P(yg(x) ≤ δ) + E_{x,y} e^{−kδ²/2}
= P(yg(x) ≤ δ) + e^{−kδ²/2}
= E_Y P_{x,y}(yg(x) ≤ δ) + e^{−kδ²/2}.

Similarly to what we did be…
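The Hoeffding step can be illustrated by simulation: for k independent ±1 votes with mean zero, the average exceeds δ with frequency below exp(−kδ²/2) (seeded, illustrative choices throughout).

```python
import numpy as np

# Hoeffding step by simulation: k independent votes Y_j in {-1,+1} with
# mean 0; the chance the average exceeds delta is below exp(-k delta^2 / 2).
rng = np.random.default_rng(3)
k, delta, trials = 50, 0.4, 20000
Y = rng.choice([-1.0, 1.0], size=(trials, k))
dev = Y.mean(axis=1)                 # (1/k) sum_j (Y_j - E Y_j)
empirical = float(np.mean(dev >= delta))
bound = float(np.exp(-k * delta ** 2 / 2))
print(empirical, bound)
```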
In the last lecture, we proved

P_{x,y}( sup_{C∈C} [ P(C) − (1/n) Σ_{i=1}^n I(x_i ∈ C) ] / √P(C) ≥ t ) ≤ 4 G(2n) e^{−nt²/2},

where G(n) = E Δ_{C(x_1,…,x_n)}(x_1, …, x_n). How many different g’s are there? At most card F_k ≤ N(n)^k. For a fixed g,

card { {yg(x) ≤ δ} ∩ {x_1, …, x_n}, δ ∈ [−1, 1] } ≤ n + 1.

Indeed, …
(8n + 4)). In particular,

[ P(yg(x) ≤ δ) − (1/n) Σ_{i=1}^n I(y_i g(x_i) ≤ δ) ]² / P(yg(x) ≤ δ) ≤ (2/n) ( u + k log N(2n) + log(8n + 4) ).   (32.1)

Recall that, since (x − y)²/x is convex with respect to (x, y),

[ E_Y P_{x,y}(yg(x) ≤ δ) − E_Y (1/n) Σ_{i=1}^n I(y_i g(x_i) ≤ δ) ]² / E_Y P(yg(x) ≤ δ) ≤ E_Y [ P(yg(x) ≤ δ) − (1/n) Σ_{i=1}^n I(y_i g(x…   (32.2)
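The convexity of (x − y)²/x in (x, y) used here (a quadratic-over-linear function, jointly convex for x > 0) can be spot-checked by midpoint convexity on random points; this is a numerical sanity check, not a proof.

```python
import numpy as np

# Midpoint-convexity spot check for g(x, y) = (x - y)^2 / x on x > 0
# (seeded random pairs, illustration only).
rng = np.random.default_rng(4)

def g(x, y):
    return (x - y) ** 2 / x

x1, x2 = rng.uniform(0.1, 2.0, 1000), rng.uniform(0.1, 2.0, 1000)
y1, y2 = rng.uniform(-1.0, 1.0, 1000), rng.uniform(-1.0, 1.0, 1000)
mid = g((x1 + x2) / 2, (y1 + y2) / 2)
avg = (g(x1, y1) + g(x2, y2)) / 2
print(bool(np.all(mid <= avg + 1e-9)))
```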
Hence,

[ P(yf(x) ≤ 0) − (1/n) Σ_{i=1}^n I(y_i f(x_i) ≤ 2δ) − 2/n ]² / P(yf(x) ≤ 0) ≤ (2/n) ( u + (2 log n / δ²) log N(2n) + log(8n + 4) )

with probability at least 1 − e^{−u}. Recall that for SVM, N(n) = card {±K(x_i, x)} ≤ 2n.

Lecture 33 Talagrand’s convex-hull distance inequality. 18.465

Lemma 33…
2 − e^{1/4}. We need

(log r)² − log r − 2(log r)² − log(2 − r) ≤ 0.

Let f(r) = log(2 − r) + log r + (log r)². Is f(r) ≥ 0? Since f(1) = 0, it is enough to prove f′(r) ≤ 0. Is

f′(r) = −1/(2 − r) + 1/r + (2 log r)/r ≤ 0 ?

Enough to show rf′(r) = −r/(2 − r) + 1 + 2 log r ≤ 0, and since rf′(r) = 0 at r = 1, enough to show (rf′(r))′ ≥ 0:

(rf′(r))′ = 2/r − 2/(2 − r)² …
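The conclusion of this derivative argument, f(r) = log(2 − r) + log r + (log r)² ≥ 0 on (0, 1] with equality at r = 1, can be confirmed numerically:

```python
import numpy as np

# Numerical confirmation that f(r) = log(2 - r) + log r + (log r)^2 >= 0
# on (0, 1], with equality at r = 1.
r = np.linspace(1e-4, 1.0, 100000)
f = np.log(2 - r) + np.log(r) + np.log(r) ** 2
print(float(f.min()), float(f[-1]))
```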
∈ A s.t. if s_i = 0 then x_i = y_i}. For example:

x = (x_1, x_2, …, x_n)
y = (y_1, y_2, …, y_n)   (with x_2 ≠ y_2)
s = (0, 1, …, 0)

Note that it can happen that x_i = y_i but s_i ≠ 0.

(2) U(A, x) = conv V(A, x).

(3) d(A, x) = min_{u∈U(A,x)} |u|² = min_{u∈U(A,x)} Σ_i u_i², where u = Σ λ_i u^i, u^i = (u^i_1, …, u^i_n) ∈ V(A, …
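Definitions (1)-(3) can be made concrete on a toy example. The sketch below (my own illustration) enumerates V(A, x) for a small A ⊆ {0,1}ⁿ and approximates d(A, x) by Frank-Wolfe over conv V(A, x); the example is chosen so the minimizer lies strictly inside the hull.

```python
import numpy as np
from itertools import product

# Toy illustration of (1)-(3): enumerate V(A, x) for A in {0,1}^n, then
# approximate d(A, x) = min_{u in conv V(A,x)} |u|^2 with Frank-Wolfe.
def V(A, x):
    n = len(x)
    verts = []
    for s in product([0, 1], repeat=n):
        # s is allowed iff some y in A agrees with x wherever s_i = 0
        if any(all(s[i] == 1 or x[i] == y[i] for i in range(n)) for y in A):
            verts.append(np.array(s, dtype=float))
    return verts

def d(A, x, iters=4000):
    verts = V(A, x)
    u = verts[0].copy()
    for k in range(iters):                          # minimize |u|^2 over hull
        v = min(verts, key=lambda w: float(u @ w))  # linear minimization step
        u += (2.0 / (k + 2.0)) * (v - u)
    return float(u @ u)

x = (0, 0)
A = [(0, 1), (1, 0)]   # V(A, x) = {(0,1), (1,0), (1,1)}
dist = d(A, x)         # true value: |(1/2, 1/2)|^2 = 1/2
print(dist)
```

Here the minimum-norm point (1/2, 1/2) is a proper convex combination of two vertices, so the hull in (2) genuinely matters.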
) / P(A). Let x = (x_1, …, x_n, x_{n+1}) = (z, x_{n+1}). Define

A(x_{n+1}) = { (y_1, …, y_n) : (y_1, …, y_n, x_{n+1}) ∈ A }

and

B = { (y_1, …, y_n) : ∃ y_{n+1}, (y_1, …, y_n, y_{n+1}) ∈ A }.

One can verify that

s ∈ U(A(x_{n+1}), z) ⇒ (s, 0) ∈ U(A, (z, x_{n+1}))

and

t ∈ U(B, z) ⇒ (t, 1) ∈ U(A, (z, x_{n+1})).

Take 0 ≤ λ ≤ 1. Then…
e^{(1/4) d(A,(z,x_{n+1}))} dPⁿ(z) dP(x_{n+1}). Then the inner integral is

∫_{Xⁿ} e^{(1/4) d(A,(z,x_{n+1}))} dPⁿ(z) ≤ ∫_{Xⁿ} e^{(1/4)( λ d(A(x_{n+1}),z) + (1−λ) d(B,z) + (1−λ)² )} dPⁿ(z)
= e^{(1−λ)²/4} ∫_{Xⁿ} ( e^{(1/4) d(A(x_{n+1}),z)} )^λ ( e^{(1/4) d(B,z)} )^{1−λ} dPⁿ(z).

We now use Hölder’s inequality:

∫ f g dP ≤ ( ∫ f^p dP )^{1/p} ( ∫ g^q dP )^{1/q}, where 1/p + 1/q = 1…
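Hölder's inequality with the exponents used here, p = 1/λ and q = 1/(1 − λ), can be checked on an empirical measure (λ = 0.3 and the data are arbitrary illustrative choices):

```python
import numpy as np

# Check of Holder's inequality E[fg] <= (E f^p)^{1/p} (E g^q)^{1/q} on an
# empirical measure, with p = 1/lambda, q = 1/(1 - lambda).
rng = np.random.default_rng(5)
lam = 0.3
p, q = 1 / lam, 1 / (1 - lam)
f = rng.uniform(0.0, 2.0, 10000)
g = rng.uniform(0.0, 2.0, 10000)
lhs = float(np.mean(f * g))
rhs = float(np.mean(f ** p) ** (1 / p) * np.mean(g ** q) ** (1 / q))
print(lhs, rhs)
```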
e^{(1−λ)²/4} ( Pⁿ(A(x_{n+1})) / Pⁿ(B) )^{−λ} · (1/Pⁿ(B)) ≤ (1/Pⁿ(B)) ( 2 − Pⁿ(A(x_{n+1})) / Pⁿ(B) ).

Now, integrate over the last coordinate. When averaging over x_{n+1}, we get the measure of A:

∫ e^{(1/4) d(A,x)} dP^{n+1}(x) = ∫ ∫ e^{(1/4) d(A,(z,x_{n+1}))} dPⁿ(z) dP(x_{n+1}) ≤ (1/Pⁿ(B)) ( 2 − …
1, x_2, …, x_n)
y = (y_1, y_2, …, y_n)   (with x_2 ≠ y_2)
s = (0, 1, …, 0)

Build U(A, x) = conv V(A, x). Finally,

d(A, x) = min { |x − u|² ; u ∈ conv A }.

Theorem 34.1. Consider a convex and Lipschitz f : Rⁿ → R, |f(x) − f(y)| ≤ L|x − y|, ∀x, y ∈ Rⁿ. Then

P( f(x_1, …, x_n) ≥ M + L√t ) ≤ 2 e^{−t/4} and …
x) ≤ t, i.e. the complement of the event E. Then

|x − u_0| = √( d(A, x) ) ≤ √t.

Hence, |f(x) − f(u_0)| ≤ L|x − u_0| ≤ L√t.

Lecture 34 Consequences of Talagrand’s convex-hull distance inequality. 18.465

So, f(x) ≤ f(u_0) + L√t. What is f(u_0)? We know that u_0 ∈ conv A, so u_0 = Σ λ_i a_i, a_i ∈ A, and Σ λ…
≥ M ) ≤ (1 / P( f ≤ M − L√t )) e^{−t/4}, and since P(f ≥ M) ≥ 1/2 by the definition of the median,

P( f(x) ≤ M − L√t ) ≤ 2 e^{−t/4}.

Example 34.1. Let H ⊆ Rⁿ be a bounded set. Let

f(x_1, …, x_n) = sup_{h∈H} | Σ_{i=1}^n h_i x_i |.

Let’s check:

(1) convexity:

f(λx + (1 − λ)y) = sup_{h∈H} | Σ_{i=1}^n h_i (λ…
f(x) − f(y) = sup_{h∈H} | Σ_{i=1}^n h_i x_i | − sup_{h∈H} | Σ_{i=1}^n h_i y_i | ≤ sup_{h∈H} | Σ_{i=1}^n h_i (x_i − y_i) |
≤ (by Cauchy-Schwarz) sup_{h∈H} ( Σ h_i² )^{1/2} ( Σ (x_i − y_i)² )^{1/2} = |x − y| · sup_{h∈H} ( Σ h_i² )^{1/2},

so L = sup_{h∈H} ( Σ h_i² )^{1/2} is the Lipschitz constant.

Theorem 34.2. If M is the median of f(x_1, …, x_n), and x_1, …
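The Lipschitz property with constant L = sup_{h∈H} (Σ h_i²)^{1/2} can be spot-checked on a random finite H (seeded, illustrative choices):

```python
import numpy as np

# Spot check: f(x) = sup_{h in H} |sum_i h_i x_i| satisfies
# |f(x) - f(y)| <= L |x - y| with L = sup_{h in H} (sum_i h_i^2)^{1/2}.
rng = np.random.default_rng(6)
H = rng.uniform(-1.0, 1.0, size=(20, 5))   # a random finite H in R^5
L = float(np.max(np.linalg.norm(H, axis=1)))

def f(x):
    return float(np.max(np.abs(H @ x)))

ok = True
for _ in range(200):
    x, y = rng.uniform(-1, 1, 5), rng.uniform(-1, 1, 5)
    ok = ok and abs(f(x) - f(y)) <= L * float(np.linalg.norm(x - y)) + 1e-12
print(ok)
```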
→ R}, not necessarily bounded. Define Z(x) = Z(x_1, …, x_n) = sup_{f∈F} Σ f(x_i) (or sup_{f∈F} | Σ f(x_i) |).

Example 35.1. f → (1/n)(f − Ef). Then Z(x) = sup_{f∈F} (1/n) Σ_{i=1}^n f(x_i) − Ef.

Consider x′ = (x′_1, …, x′_n), an independent copy of x. Let

V(x) = E_{x′} sup_{f∈F} Σ_{i=1}^n ( f(x_i) − f(x′_i) )²

be…