Let $\alpha^0 = \arg\max_\alpha W(\alpha)$ subject to $\alpha_i \ge 0$ and $\sum_i \alpha_i y_i = 0$, and let $\psi = \sum_i \alpha_i y_i x_i$. In other words, $\alpha^0$ corresponds to the classifier trained on the full sample, while $\alpha'$ corresponds to the classifier trained with the support vector $(x_p, y_p)$ removed, i.e. on $\{(x_i, y_i) : i = 1, \dots, p-1, p+1, \dots, n\}$ …
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
the margin $m(\alpha^0)$ (Lecture 04: Generalization error of SVM, 18.465), and $W(\alpha') \le W(\alpha^0)$. On the other hand, the hyperplane determined by $\alpha^0 - \alpha^0_p\gamma$ might not separate $(x_i, y_i)$ for $i \ne p$, but it corresponds to an equal or larger "margin" $1/\|\psi(\alpha^0 - \alpha^0_p\gamma)\|$ than $m(\alpha')$. Let us consider the inequality $\max$ …
For the left-hand side, $(1 - y_p\,\psi'\cdot x_p)^2/\|x_p\|^2$. For the right-hand side,
$$W(\alpha^0 - \alpha^0_p\gamma) = W(\alpha^0) - \alpha^0_p\bigl(1 - y_p\,\psi^0\cdot x_p\bigr) - \tfrac12 (\alpha^0_p)^2 \|x_p\|^2$$ …
(18.465) For a fixed $f \in \mathcal{F}$, if we observe that $\frac{1}{n}\sum_{i=1}^n I(f(X_i) \ne Y_i)$ is small, can we say that $P(f(X) \ne Y)$ is small? By the Law of Large Numbers,
$$\frac{1}{n}\sum_{i=1}^n I(f(X_i) \ne Y_i) \to E\,I(f(X) \ne Y) = P(f(X) \ne Y).$$
The Central Limit Theorem says
$$\sqrt{n}\Bigl(\frac{1}{n}\sum_{i=1}^n I(f(X_i) \ne Y_i) - E\,I(f(X) \ne Y)\Bigr)$$ …
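The convergence and the $\sqrt{n}$ fluctuation scale above can be checked numerically. This is a minimal Monte Carlo sketch; the distribution of $(X, Y)$ and the classifier $f$ are hypothetical choices for illustration, set up so that $P(f(X) \ne Y) = 0.1$ by construction.

```python
import math
import random

random.seed(0)

# Hypothetical setup: X uniform on [0, 1], label I(X > 0.5) flipped with
# probability 0.1, and the fixed classifier f predicts I(X > 0.5), so that
# P(f(X) != Y) = 0.1 by construction.
def sample(n):
    pts = []
    for _ in range(n):
        x = random.random()
        y = x > 0.5
        if random.random() < 0.1:
            y = not y
        pts.append((x, y))
    return pts

def empirical_error(pts):
    return sum((x > 0.5) != y for x, y in pts) / len(pts)

for n in (100, 10000):
    errs = [empirical_error(sample(n)) for _ in range(200)]
    mean = sum(errs) / len(errs)
    sd = (sum((e - mean) ** 2 for e in errs) / len(errs)) ** 0.5
    # LLN: mean -> 0.1; CLT: sd should shrink like sqrt(p(1 - p)/n)
    print(n, round(mean, 3), round(sd, 4), round(math.sqrt(0.1 * 0.9 / n), 4))
```

The printed standard deviation of the empirical error tracks $\sqrt{p(1-p)/n}$, which is exactly the CLT scaling discussed above.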
Let $EZ = 0$, $EZ^2 = \sigma^2$, $|Z| \le M = \mathrm{const}$, $Z_1, \dots, Z_n$ independent copies of $Z$, and $t \ge 0$. Then
$$P\Bigl(\sum_{i=1}^n Z_i \ge t\Bigr) \le \exp\Bigl(-\frac{n\sigma^2}{M^2}\,\varphi\Bigl(\frac{tM}{n\sigma^2}\Bigr)\Bigr),$$
where $\varphi(x) = (1+x)\log(1+x) - x$. Proof. Since the $Z_i$ are i.i.d.,
$$P\Bigl(\sum_{i=1}^n Z_i \ge t\Bigr) \le e^{-\lambda t}\,E e^{\lambda\sum_i Z_i} = e^{-\lambda t}\prod_{i=1}^n E e^{\lambda Z_i} = e^{-\lambda t}\bigl(E e^{\lambda Z}\bigr)^n$$ …
$$\le e^{-\lambda t}\exp\Bigl(\frac{n\sigma^2}{M^2}\bigl(e^{\lambda M} - 1 - \lambda M\bigr)\Bigr) = \exp\Bigl(-\lambda t + \frac{n\sigma^2}{M^2}\bigl(e^{\lambda M} - 1 - \lambda M\bigr)\Bigr).$$
Now, minimize the above bound with respect to $\lambda$. Taking the derivative w.r.t. $\lambda$ and setting it to zero:
$$-t + \frac{n\sigma^2}{M^2}\bigl(M e^{\lambda M} - M\bigr) = 0 \;\Longrightarrow\; e^{\lambda M} = 1 + \frac{tM}{n\sigma^2}, \qquad \lambda = \frac{1}{M}\log\Bigl(1 + \frac{tM}{n\sigma^2}\Bigr).$$
The bound becomes …
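The final bound can be evaluated directly and compared against a simulated tail probability. A minimal sketch, using the simplest bounded centered variable ($Z$ uniform on $\{-1,+1\}$, so $\sigma^2 = M = 1$); the particular values of $n$ and $t$ are arbitrary choices.

```python
import math
import random

random.seed(1)

def bennett_bound(n, sigma2, M, t):
    """exp(-(n*sigma^2/M^2) * phi(t*M/(n*sigma^2))), phi(x)=(1+x)log(1+x)-x."""
    phi = lambda x: (1 + x) * math.log(1 + x) - x
    return math.exp(-(n * sigma2 / M ** 2) * phi(t * M / (n * sigma2)))

# Z uniform on {-1, +1}: EZ = 0, sigma^2 = 1, |Z| <= M = 1.
n, t, trials = 1000, 60.0, 5000
hits = 0
for _ in range(trials):
    hits += sum(random.choice((-1, 1)) for _ in range(n)) >= t
mc = hits / trials
bound = bennett_bound(n, 1.0, 1.0, t)
print(mc, bound)  # the empirical tail frequency should not exceed the bound
```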
copies of $X$, and $t \ge 0$. Then
$$P\Bigl(\sum_{i=1}^n X_i \ge t\Bigr) \le \exp\Bigl(-\frac{n\sigma^2}{M^2}\,\varphi\Bigl(\frac{tM}{n\sigma^2}\Bigr)\Bigr),$$
where $\varphi(x) = (1+x)\log(1+x) - x$. If $x$ is small, $\varphi(x) = (1+x)\bigl(x - \frac{x^2}{2} + \cdots\bigr) - x = x + x^2 - \frac{x^2}{2} - x + \cdots = \frac{x^2}{2} + \cdots$. If $x$ is large, $\varphi(x) \sim x\log x$. We can weaken the bound by decreasing $\varphi(x)$. Taking …
$\ge 1 - e^{-u}$ …
$$P\Bigl(\sum_{i=1}^n X_i \le \sqrt{2n\sigma^2 u} + \frac{2uM}{3}\Bigr) \ge 1 - e^{-u}.$$
For non-centered $X_i$, replace $X_i$ with $X_i - EX$ or $EX - X_i$. Then $|X_i - EX| \le 2M$, and so with high probability
$$\sum_{i=1}^n (X_i - EX) \le \sqrt{2n\sigma^2 u} + \frac{4uM}{3}.$$
Normalizing by $n$,
$$\frac{1}{n}\sum_{i=1}^n X_i - EX \le \sqrt{\frac{2\sigma^2 u}{n}} + \frac{4uM}{3n} \quad\text{and}\quad EX - \frac{1}{n}\sum_{i=1}^n X_i \le \sqrt{\frac{2\sigma^2 u}{n}}$$ …
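The one-sided Bernstein guarantee above is easy to sanity-check by simulation. A minimal sketch with centered $\pm 1$ variables (so $\sigma^2 = M = 1$); the choice of $n$ and $u$ is arbitrary.

```python
import math
import random

random.seed(2)

# Bernstein's inequality for centered |X_i| <= M with variance sigma^2:
# P( sum X_i >= sqrt(2 n sigma^2 u) + 2uM/3 ) <= e^{-u}.
n, M, sigma2, u = 1000, 1.0, 1.0, 3.0  # X_i uniform on {-1, +1}
threshold = math.sqrt(2 * n * sigma2 * u) + 2 * u * M / 3
trials = 5000
hits = 0
for _ in range(trials):
    hits += sum(random.choice((-1, 1)) for _ in range(n)) >= threshold
freq = hits / trials
print(freq, math.exp(-u))  # observed tail frequency vs. the e^{-u} guarantee
```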
In Bernstein's inequality, take $X_i$ to be $I(f(X_i) \ne Y_i)$:
$$E\,I(f(X) \ne Y) - \frac{1}{n}\sum_{i=1}^n I(f(X_i) \ne Y_i) \le \sqrt{\frac{2\,P(f(X) \ne Y)\bigl(1 - P(f(X) \ne Y)\bigr)u}{n}} + \frac{2u}{3n},$$
because $E\,I(f(X) \ne Y) = P(f(X) \ne Y) = E\,I^2$ and therefore $\mathrm{Var}(I) = \sigma^2 = E I^2 - (EI)^2$. Thus,
$$P(f(X) \ne Y) \le \frac{1}{n}\sum_{i=1}^n I(f(X_i) \ne Y_i) + {}$$ …
$t \ge 0$,
$$P\Bigl(\sum_{i=1}^n \varepsilon_i a_i \ge t\Bigr) \le \exp\Bigl(-\frac{t^2}{2\sum_{i=1}^n a_i^2}\Bigr).$$
Proof. Similarly to the proof of Bennett's inequality (Lecture 5),
$$P\Bigl(\sum_{i=1}^n \varepsilon_i a_i \ge t\Bigr) \le e^{-\lambda t}\,E\exp\Bigl(\lambda\sum_{i=1}^n \varepsilon_i a_i\Bigr) = e^{-\lambda t}\prod_{i=1}^n E\exp(\lambda\varepsilon_i a_i).$$
Using the inequality $\frac{e^x + e^{-x}}{2} \le e^{x^2/2}$ (from the Taylor expansion), …
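The subgaussian tail for Rademacher sums can be observed directly. A minimal Monte Carlo sketch; the coefficients $a_i$ and the threshold $t$ are arbitrary illustrative choices.

```python
import math
import random

random.seed(3)

a = [0.3, -1.2, 0.7, 2.0, -0.5, 1.1, 0.9, -0.4]  # arbitrary coefficients
s2 = sum(x * x for x in a)  # sum a_i^2 = Var(sum eps_i a_i)
t = 2.5
trials = 100000
hits = 0
for _ in range(trials):
    s = sum(x if random.random() < 0.5 else -x for x in a)
    hits += s >= t
freq = hits / trials
bound = math.exp(-t * t / (2 * s2))
print(freq, bound)  # empirical tail of the Rademacher sum vs. the bound
```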
$- e^{-u}$ … and $\sum_{i=1}^n a_i^2 = \mathrm{Var}\bigl(\sum_{i=1}^n \varepsilon_i a_i\bigr)$. Rademacher sums will play an important role in what follows. Consider again the problem of estimating $Ef$. We will see that, by the symmetrization technique,
$$\frac{1}{n}\sum_{i=1}^n f(X_i) - Ef \;\sim\; \frac{1}{n}\sum_{i=1}^n f(X_i) - \frac{1}{n}\sum_{i=1}^n f(X_i').$$ …
$\frac{1}{n}\sum_{i=1}^n f(X_i') - Ef$ … (Lecture 07: Hoeffding, Hoeffding–Chernoff, and Khinchine inequalities, 18.465.) For the first inequality we use Jensen's inequality:
$$E\Bigl|\frac{1}{n}\sum_{i=1}^n f(X_i) - Ef\Bigr| = E\Bigl|\frac{1}{n}\sum_{i=1}^n f(X_i) - \frac{1}{n}\sum_{i=1}^n E f(X_i')\Bigr| \le E_X E_{X'}\Bigl|$$ …
$e^{\lambda x} = e^{\lambda(x\cdot 1 + (1-x)\cdot 0)} \le x e^\lambda + (1-x)e^{\lambda\cdot 0} = 1 - x + x e^\lambda$. Hence, $E e^{\lambda X} = 1 - EX + EX\,e^\lambda = 1 - \mu + \mu e^\lambda$. Again, we minimize the following bound with respect to $\lambda > 0$:
$$P\Bigl(\sum_{i=1}^n X_i \ge n(\mu + t)\Bigr) \le e^{-\lambda n(\mu+t)}\,E e^{\lambda\sum_i X_i} = e^{-\lambda n(\mu+t)}\bigl(E e^{\lambda X}\bigr)^n \le e^{-\lambda n(\mu+t)}\bigl(1 - \mu + \mu e^\lambda\bigr)^n.$$
Take the derivative w.r.t. $\lambda$: …
…, and Khinchine inequalities (18.465), completing the proof. Moreover,
$$P\Bigl(\mu - \frac{1}{n}\sum_{i=1}^n X_i \ge t\Bigr) = P\Bigl(\frac{1}{n}\sum_{i=1}^n Z_i - \mu_Z \ge t\Bigr) \le e^{-nD(\mu_Z + t,\,\mu_Z)} = e^{-nD(1-\mu_X+t,\,1-\mu_X)},$$
where $Z_i = 1 - X_i$ (and thus $\mu_Z = 1 - \mu_X$). If $0 < \mu \le 1/2$, then
$$D(1-\mu+t,\,1-\mu) \ge \frac{t^2}{2\mu(1-\mu)}.$$
Hence, we get … Solving for $t$, …
Let $a_1, \dots, a_n \in \mathbb{R}$ and $\varepsilon_1, \dots, \varepsilon_n$ be i.i.d. Rademacher random variables, $P(\varepsilon_i = 1) = P(\varepsilon_i = -1) = 0.5$, and $0 < p < \infty$. Then
$$A_p\Bigl(\sum_{i=1}^n a_i^2\Bigr)^{1/2} \le \Bigl(E\Bigl|\sum_{i=1}^n a_i\varepsilon_i\Bigr|^p\Bigr)^{1/p} \le B_p\Bigl(\sum_{i=1}^n a_i^2\Bigr)^{1/2}$$
for some constants $A_p$ and $B_p$ depending on $p$. Proof. Let $\sum_i a_i^2 = 1$ without loss of generality. …
$$E\Bigl|\sum a_i\varepsilon_i\Bigr|^2 = E\Bigl(\Bigl|\sum a_i\varepsilon_i\Bigr|^{\frac{2p}{3}}\cdot\Bigl|\sum a_i\varepsilon_i\Bigr|^{2-\frac{2p}{3}}\Bigr) \le \Bigl(E\Bigl|\sum a_i\varepsilon_i\Bigr|^{p}\Bigr)^{2/3}\Bigl(E\Bigl|\sum a_i\varepsilon_i\Bigr|^{6-2p}\Bigr)^{1/3} \le \Bigl(E\Bigl|\sum a_i\varepsilon_i\Bigr|^{p}\Bigr)^{2/3}\bigl(B_{6-2p}\bigr)^{\frac{6-2p}{3}},$$
by Hölder's inequality with exponents $\frac{3}{2}$ and $3$. Thus $\bigl(E|\sum a_i\varepsilon_i|^p\bigr)^2 \ge (B_{6-2p})^{-(6-2p)}$, completing the proof. (Lecture 08 …)
of sets, $\mathrm{uniformGC}(\mathcal{F})$. Let $\mathcal{C} = \{C \subseteq \mathcal{X}\}$, $f_C(x) = I(x \in C)$. The most pessimistic value is
$$\sup_P E\sup_{C\in\mathcal{C}}\bigl|P_n(C) - P(C)\bigr| \to 0.$$
For any sample $\{x_1, \dots, x_n\}$, we can look at the ways $\mathcal{C}$ intersects the sample: $\{C \cap \{x_1, \dots, x_n\} : C \in \mathcal{C}\}$. Let $\Delta_n(\mathcal{C}, x_1, \dots, x_n) = \mathrm{card}\,\{C \cap \{x_1, \dots, x_n\} : C$ …
$V$. Hence, $\mathcal{C}$ will pick out only very few subsets out of $2^n$ (because $\bigl(\frac{en}{V}\bigr)^V \sim n^V$). Lemma 8.3. The number $\Delta_n(\mathcal{C}, x_1, \dots, x_n)$ of subsets picked out by $\mathcal{C}$ is bounded by the number of subsets shattered by $\mathcal{C}$. Proof. Without loss of generality, we restrict $\mathcal{C}$ to $\mathcal{C} := \{C \cap \{x_1, \dots, x_n\} : C \in \mathcal{C}\}$, and we have $\mathrm{card}(\mathcal{C}$ …
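The gap between $\Delta_n$ and $2^n$ can be made concrete for a simple class. A minimal sketch using intervals $[a, b]$ on the line (VC dimension $V = 2$), counting the subsets picked out of an $n$-point sample and comparing with the Sauer bound $\sum_{k\le V}\binom{n}{k}$; the sample itself is an arbitrary choice.

```python
from math import comb

# The class of intervals [a, b] on the line has VC dimension V = 2.
# Delta_n counts the subsets of an n-point sample picked out by the class;
# Sauer's lemma bounds it by sum_{k <= V} C(n, k), far below 2^n.
def picked_subsets(points):
    pts = sorted(points)
    subsets = {frozenset()}  # the empty set is picked out (interval below min)
    for i in range(len(pts)):
        for j in range(i, len(pts)):
            subsets.add(frozenset(pts[i:j + 1]))  # a contiguous run
    return subsets

n, V = 10, 2
delta_n = len(picked_subsets(range(n)))
sauer = sum(comb(n, k) for k in range(V + 1))
print(delta_n, sauer, 2 ** n)  # 56 56 1024
```

For intervals the Sauer bound is attained exactly: $n(n+1)/2 + 1 = \binom{n}{0} + \binom{n}{1} + \binom{n}{2}$.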
means that $x_i \in T_i(C)$, and that $C\setminus\{x_i\} \in \mathcal{C}$. Thus both $B \cup \{x_i\}$ and $B\setminus\{x_i\}$ are picked out by $\mathcal{C}$. Since either $B = B \cup \{x_i\}$ or $B = B\setminus\{x_i\}$, $B$ is picked out by $\mathcal{C}$. Thus $A$ is shattered by $\mathcal{C}$. Apply the operator $T = T_1 \circ \dots \circ T_n$ until $T^{k+1}(\mathcal{C}) = T^k(\mathcal{C})$. This will happen after at most $\sum_{C\in\mathcal{C}}\mathrm{card}(C)$ steps, since $\mathrm{card}(T_i(C)$ …
is at most $d$. Proof. Observation: for any $\{x_1, \dots, x_{d+1}\}$, we cannot shatter $\{x_1, \dots, x_{d+1}\}$ $\longleftrightarrow$ $\exists I \subseteq \{1, \dots, d+1\}$ s.t. we cannot pick out $\{x_i, i \in I\}$. If we can pick out $\{x_i, i \in I\}$, then for some $C \in \mathcal{C}$ there are $\alpha_1, \dots, \alpha_d$ s.t. $\sum_{k=1}^d \alpha_k f_k(x_i) > 0$ for $i \in I$ and $\sum_{k=1}^d \alpha_k f_k(x_i) \le 0$ for $i \notin I$. Denote …
$$\varphi_1\sum_{k=1}^d \alpha_k f_k(x_1) + \dots + \varphi_{d+1}\sum_{k=1}^d \alpha_k f_k(x_{d+1}) = 0.$$
Hence,
$$\sum_{i\in I}\underbrace{\varphi_i}_{>0}\,\underbrace{\sum_{k=1}^d \alpha_k f_k(x_i)}_{>0} \;=\; \sum_{i\notin I}\underbrace{(-\varphi_i)}_{\ge 0}\,\underbrace{\sum_{k=1}^d \alpha_k f_k(x_i)}_{\le 0},$$
with the left-hand side positive and the right-hand side nonpositive. Contradiction.
• Half-spaces in $\mathbb{R}^d$: $\{\{\alpha_1 x_1 + \dots + \alpha_d x_d + \alpha_{d+1} > 0\} : \alpha_1, \dots, \alpha_{d+1} \in \mathbb{R}\}$. By setting $f_1 = x_1, \dots$ …
$$\Delta_n(\mathcal{C}\cap\mathcal{D}, x_1, \dots, x_n) \le \Delta_n(\mathcal{C}, x_1, \dots, x_n)\,\Delta_n(\mathcal{D}, x_1, \dots, x_n) \le \Bigl(\frac{en}{V_{\mathcal{C}}}\Bigr)^{V_{\mathcal{C}}}\Bigl(\frac{en}{V_{\mathcal{D}}}\Bigr)^{V_{\mathcal{D}}} < 2^n$$
for large enough $n$. (b) $\mathcal{C}\cup\mathcal{D} = (\mathcal{C}^c\cap\mathcal{D}^c)^c$, and the result follows from (1) and (2). Example 9.1. Decision trees on $\mathbb{R}^d$ with linear decision rules: each leaf $\{C_1\cap\dots\cap C_\ell\}$ is VC, and $\bigcup_{\text{leaves}}\{C_1\cap\dots\cap C_\ell\}$ is VC. …
Assume the event
$$\Bigl\{\sup_{C\in\mathcal{C}}\Bigl|\frac{1}{n}\sum_{i=1}^n I(X_i \in C) - P(C)\Bigr| \ge t\Bigr\}$$
occurs, and let $X = (X_1, \dots, X_n)$ belong to this event. Then there exists $C_X$ such that
$$\Bigl|\frac{1}{n}\sum_{i=1}^n I(X_i \in C_X) - P(C_X)\Bigr| \ge t.$$
For a fixed $C$,
$$P_{X'}\Bigl(\Bigl|\frac{1}{n}\sum_{i=1}^n I(X_i' \in C) - P(C)\Bigr| \ge t/2$$ …
$$P_{X'}\Bigl(\Bigl|\frac{1}{n}\sum_{i=1}^n I(X_i' \in C_X) - P(C_X)\Bigr| \le t/2\Bigr) \ge 1/2,$$
provided $t^2 \ge 2/n$. Assume that the event
$$\Bigl|\frac{1}{n}\sum_{i=1}^n I(X_i' \in C_X) - P(C_X)\Bigr| \le t/2$$
occurs. Recall that
$$\Bigl|\frac{1}{n}\sum_{i=1}^n I(X_i \in C_X) - P(C_X)\Bigr| \ge t.$$
Hence, it must be that $\bigl|\frac{1}{n}\sum_{i=1}^n$ …
Theorem 10.1. If $VC(\mathcal{C}) = V$, then
$$P\Bigl(\sup_{C\in\mathcal{C}}\Bigl|\frac{1}{n}\sum_{i=1}^n I(X_i \in C) - P(C)\Bigr| \ge t\Bigr) \le 4\Bigl(\frac{2en}{V}\Bigr)^V e^{-nt^2/8}.$$
Proof. The statement follows by bounding
$$P\Bigl(\sup_{C\in\mathcal{C}}\Bigl|\frac{1}{n}\sum_{i=1}^n I(X_i \in C) - \frac{1}{n}\sum_{i=1}^n I(X_i' \in C)\Bigr| \ge t/2\Bigr),$$
where we introduce $\varepsilon_i\bigl(I(X_i \in C) - I(X_i' \in C)\bigr)$ …
The first equality is due to the fact that interchanging the names of $X_i$ and $X_i'$ (i.e. introducing random signs $\varepsilon_i$, $P(\varepsilon_i = \pm 1) = 1/2$) does not have any effect on the distribution. In the last line, it is important to see that the probability is taken with respect to the $\varepsilon_i$'s, while the $X_i$ and $X_i'$ are fixed. By Sauer's lemma,
$$\Delta_{2n}(\mathcal{C}, X_1, \dots, X_n, X_1', \dots, X_n') \le \Bigl(\frac{2en}{V}\Bigr)^V$$ …
$\in C_k)) \ge t/2$ … By Hoeffding's inequality, this is
$$\le 2E\sum_{k=1}^N 2\exp\Bigl(\frac{-n^2t^2/8}{\sum_{i=1}^n\bigl(I(X_i\in C_k) - I(X_i'\in C_k)\bigr)^2}\Bigr) \le 2E\sum_{k=1}^N 2\exp\Bigl(\frac{-n^2t^2}{8n}\Bigr) \le 2\Bigl(\frac{2en}{V}\Bigr)^V\cdot 2e^{-nt^2/8}.$$
Hence, … (Lecture 11: Optimistic VC inequality, 18.465.) Last time we proved the pessimistic VC inequality …
$= \pm 1$. These examples are labeled according to some unknown $C_0$, so that $Y = 1$ if $X \in C_0$ and $Y = 0$ if $X \notin C_0$. Let $\mathcal{C} = \{C : C \subseteq \mathcal{X}\}$, a set of classifiers. A classifier $C$ makes a mistake if $X \in (C\setminus C_0)\cup(C_0\setminus C) = C\,\triangle\,C_0$. Similarly to the last lecture, we can derive bounds on
$$\sup_{C}\Bigl|\frac{1}{n}\sum_{i=1}^n I(X_i\in C\,\triangle\,C_0) - P(C\,\triangle\,C_0)\Bigr|$$ …
since $\frac{1}{n}\sum_{i=1}^n E\,I(X_i \in C) = P(C)$ and $nP(C) \ge 1$. Otherwise,
$$P\Bigl(\sum_{i=1}^n I(X_i \in C) = 0\Bigr) = \bigl(1 - P(C)\bigr)^n$$
can be as close to $0$ as we want. Similarly to the proof of the previous lecture, consider the event
$$\Bigl\{(X_i) : \sup_C \frac{P(C) - \frac{1}{n}\sum_{i=1}^n I(X_i\in C)}{\sqrt{P(C)}} \ge t\Bigr\}.$$
(Lecture 11 …)
By Hoeffding's inequality,
$$P_\varepsilon\Bigl(\exists k:\ \frac{1}{n}\sum_{i=1}^n\varepsilon_i\bigl(I(X_i'\in C_k) - I(X_i\in C_k)\bigr) \ge \frac{t}{2}\sqrt{\frac{1}{2n}\sum_{i=1}^n\bigl(I(X_i\in C_k) + I(X_i'\in C_k)\bigr)}\Bigr)$$
$$\le E\sum_{k=1}^N \exp\Bigl(-\frac{t^2 n}{16}\cdot\frac{\sum_{i=1}^n\bigl(I(X_i\in C_k)+I(X_i'\in C_k)\bigr)}{\sum_{i=1}^n\bigl(I(X_i\in C_k)-I(X_i'\in C_k)\bigr)^2}\Bigr) \le E\sum_{k=1}^N e^{-nt^2/16},$$
since the upper sum in the exponent is bigger than the lower sum (compare term by term: $(a-b)^2 \le a+b$ for $a, b \in \{0,1\}$). …
$C \in \mathcal{C}\}$. Then $\mathcal{F}(\mathcal{C})$ is a VC-subgraph class if and only if $\mathcal{C}$ is a VC class of sets. Assume $d$ functions $\{f_1, \dots, f_d\} : \mathcal{X}\mapsto\mathbb{R}$ are fixed. Let
$$\mathcal{F} = \Bigl\{\sum_{i=1}^d \alpha_i f_i(x) : \alpha_1, \dots, \alpha_d \in \mathbb{R}\Bigr\}.$$
Then $VC(\mathcal{F}) \le d + 1$. To prove this, it is easier to use the second definition. Packing and covering numbers. Let $f, g \in \mathcal{F}$ and …
$\le i \le N$ such that $d(f, f_i) \le \varepsilon$. Definition 12.5. The $\varepsilon$-covering number, $N(\mathcal{F}, \varepsilon, d)$, is the minimal cardinality of an $\varepsilon$-cover of $\mathcal{F}$. (Lecture 12: VC-subgraph classes of functions; packing and covering numbers, 18.465.) Lemma 12.1. $D(\mathcal{F}, 2\varepsilon, d) \le N(\mathcal{F}, \varepsilon, d) \le D(\mathcal{F}, \varepsilon, d)$. Proof. To prove the first inequality, assume that …
Example 12.3. Consider the $L_1$-ball $\{x \in \mathbb{R}^d : |x|_1 \le 1\} = B_1(0)$ and $d(x,y) = |x-y|_1$. Then
$$D\bigl(B_1(0), \varepsilon, d\bigr) \le \Bigl(\frac{2+\varepsilon}{\varepsilon}\Bigr)^d \le \Bigl(\frac{3}{\varepsilon}\Bigr)^d,$$
where $\varepsilon \le 1$. Indeed, let $f_1, \dots, f_D$ be an optimal $\varepsilon$-packing. Then the volume of the ball with $\varepsilon/2$-fattening (so that the centers of the small balls fall within the boundary) is … Moreover …
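The volumetric packing bound of Example 12.3 can be checked empirically with a greedy packing. A minimal sketch in $d = 2$; the greedy-by-rejection-sampling construction is an illustrative assumption, not part of the notes.

```python
import random

random.seed(4)

# Greedy epsilon-packing of the L1 unit ball in R^2 (empirical check of
# Example 12.3): the packing number obeys D <= ((2 + eps)/eps)^d.
def l1(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def sample_l1_ball(d):
    while True:  # rejection sampling from the L1 ball
        p = [random.uniform(-1, 1) for _ in range(d)]
        if sum(abs(c) for c in p) <= 1:
            return p

d, eps = 2, 0.5
packing = []
for _ in range(50000):
    p = sample_l1_ball(d)
    if all(l1(p, q) > eps for q in packing):
        packing.append(p)
bound = ((2 + eps) / eps) ** d
print(len(packing), bound)  # greedy packing size vs. the volumetric bound 25
```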
$\bigl(\frac{8e}{\varepsilon}\log\frac{7}{\varepsilon}\bigr)^V$ (which is $\le \bigl(\frac{K}{\varepsilon}\bigr)^{V+\delta}$ for some $\delta$). Proof. Let $m = D(\mathcal{F}, \varepsilon, d)$ and $f_1, \dots, f_m$ be $\varepsilon$-separated, i.e.
$$\frac{1}{n}\sum_{i=1}^n\bigl|f_r(x_i) - f_\ell(x_i)\bigr| > \varepsilon.$$
Let $(z_1, t_1), \dots, (z_k, t_k)$ be constructed in the following way: $z_i$ is chosen uniformly from $x_1, \dots, x_n$ and $t_i$ is uniform on $[-1, 1]$. Consider $f_r$ …
(…graph classes, 18.465.) Substituting,
$$P\bigl(C_{f_r}\text{ and }C_{f_\ell}\text{ pick out different subsets of }(z_1,t_1),\dots,(z_k,t_k)\bigr) = 1 - P\bigl((z_1,t_1)\text{ is picked by both }C_{f_r}, C_{f_\ell}\text{ or by neither}\bigr)^k \ge 1 - \bigl(e^{-\varepsilon/2}\bigr)^k = 1 - e^{-k\varepsilon/2}.$$
There are $\binom{m}{2}$ ways to choose $f_r$ and $f_\ell$, so
$$P\bigl(\text{all pairs }C_{f_r}, C_{f_\ell}\text{ pick out different subsets of }(z_1,t_1),\dots\bigr)$$ …
$m^{1/V} = s$ satisfies $s \le \frac{4e}{\varepsilon}\log s$. Note that $s/\log s$ is increasing for $s \ge e$, and so for large enough $s$ the inequality will be violated. We now check that the inequality is violated for $s^* = \frac{8e}{\varepsilon}\log\frac{7}{\varepsilon}$:
$$\frac{4e}{\varepsilon}\log\Bigl(\frac{8e}{\varepsilon}\log\frac{7}{\varepsilon}\Bigr) < \frac{4e}{\varepsilon}\log\Bigl(\frac{7}{\varepsilon}\Bigr)^2 = \frac{8e}{\varepsilon}\log\frac{7}{\varepsilon},$$
since one can show that $\frac{8e}{\varepsilon}\log\frac{7}{\varepsilon} \le \bigl(\frac{7}{\varepsilon}\bigr)^2$. Hence, $m^{1/V} = s \le$ …
such that (1) $\forall f, g \in F_j$, $d(f,g) > 2^{-j}$; (2) $\forall f \in \mathcal{F}$, we can find $g \in F_j$ such that $d(f,g) \le 2^{-j}$. How to construct $F_{j+1}$ if we have $F_j$:
• $F_{j+1} := F_j$
• Find $f \in \mathcal{F}$ with $d(f,g) > 2^{-(j+1)}$ for all $g \in F_{j+1}$, and add it to $F_{j+1}$
• Repeat until you cannot find such an $f$
Define the projection $\pi_j : \mathcal{F}\mapsto F_j$ as follows: for $f \in \mathcal{F}$, find $g \in F_j$ with $d(f,g) \le 2^{-j}$ and …
$$\ge 1 - \Bigl(\frac{1}{2^2} + \frac{1}{3^2} + \frac{1}{4^2} + \cdots\Bigr)e^{-u} = 1 - \Bigl(\frac{\pi^2}{6} - 1\Bigr)e^{-u} \ge 1 - e^{-u}.$$
Recall that $f = \sum_{j=1}^{\infty}\bigl(\pi_j(f) - \pi_{j-1}(f)\bigr)$. If $f$ is close to $0$, then $2^{-(k+1)} < d(0,f) \le 2^{-k}$; find such a $k$. Then $\pi_0(f) = \dots = \pi_k(f) = 0$, and so (Lecture 14: Kolmogorov's chaining method; Dudley's entropy integral, 18.465) …
$$\int \log^{1/2} D(\mathcal{F}, \varepsilon, d)\,d\varepsilon \qquad\text{(Dudley's entropy integral)}$$
since $2^{-(k+1)} < d(0,f)$. (Lecture 15: More symmetrization; generalized VC inequality, 18.465.) Lemma 15.1. Let $\xi, \nu$ be random variables. Assume that $P(\nu \ge t) \le \Gamma e^{-\gamma t}$, where $\Gamma \ge 1$, $t \ge 0$, and $\gamma > 0$. Furthermore, for all $a > 0$ assume that $E\varphi(\xi)$ …
$$= \Gamma\cdot e\cdot e^{-\gamma t},$$
where we chose the optimal $a = t - \frac{1}{\gamma}$ to minimize $\Gamma e^{-\gamma a}$ (here $\varphi(x) = (x-a)_+$). Lemma 15.2. Let $x = (x_1, \dots, x_n)$, $x' = (x_1', \dots, x_n')$. If for functions $\varphi_1(x,x')$, $\varphi_2(x,x')$, $\varphi_3(x,x')$ … then $P\bigl(\varphi_1$ …
$$\Bigl\{\sup_{\delta>0}\bigl(E_{x'}\varphi_1 - E_{x'}\varphi_2 - \delta E_{x'}\varphi_3\bigr)4\delta \ge t\Bigr\} =: \{\xi \ge t\}.$$
By assumption, $P(\nu \ge t) \le \Gamma e^{-\gamma t}$. We want to prove $P(\xi \ge t) \le \Gamma\cdot e\cdot e^{-\gamma t}$. By the previous lemma, we only need to check whether $E\varphi(\xi) \le E\varphi(\nu)$:
$$\xi = \sup_{\delta>0} E_{x'}(\varphi_1 - \varphi_2 - \delta\varphi_3)4\delta \le E_{x'}\sup_{\delta>0}(\varphi_1 - \varphi_2 - \delta\varphi_3)4\delta = E_{x'}\nu, \qquad \varphi(\xi) \le \varphi(E_{x'}\nu)$$ …
$$\Bigl(\frac{1}{n}\sum_{i=1}^n\bigl(f(x_i) - f(x_i')\bigr)^2\Bigr)^{1/2}.$$
In Lecture 14, we proved
$$P_\varepsilon\Bigl(\forall f\in\mathcal{F},\ \frac{1}{n}\sum_{i=1}^n\varepsilon_i\bigl(f(x_i) - f(x_i')\bigr) \le \frac{2^{9/2}}{\sqrt n}\int_0^{d(0,f)}\log^{1/2}D(\mathcal{F},\varepsilon,d)\,d\varepsilon + \dots\Bigr).$$
The complement of the above is
$$P_\varepsilon\Bigl(\exists f\in\mathcal{F},\ \frac{1}{n}\sum_{i=1}^n\varepsilon_i\bigl(f(x_i) - f(x_i')\bigr) \ge \frac{2^{9/2}}{\sqrt n}\int_0^{d(0,f)}\log^{1/2}D(\mathcal{F},\varepsilon,d)\,d\varepsilon + 2^{7/2}\dots\Bigr).$$
Taking expectation with respect to $x, x'$, we get $P(\exists f\in\mathcal{F}, \dots)$ …
To the right of the "$\ge$" sign, only the distance $d(f,g)$ depends on … By Lemma 15.2 (minus the technical detail "$\exists f$"),
$$P\Bigl(\exists f\in\mathcal{F},\ E_{x'}\frac{1}{n}\sum_{i=1}^n\bigl(f(x_i)-f(x_i')\bigr) \ge E_{x'}\frac{2^{9/2}}{\sqrt n}\int_0^{d(0,f)}\log^{1/2}D(\mathcal{F},\varepsilon,d)\,d\varepsilon + 2^{7/2}\sqrt{E_{x'}d(0,f)^2\,\frac{t}{n}}\Bigr) \le e\cdot e^{-t},$$
where
$$E_{x'}\frac{1}{n}\sum_{i=1}^n\bigl(f(x_i) - f(x_i')\bigr) = \frac{1}{n}\sum_{i=1}^n f$$ …
$(x_1, \dots, x_n)$, $D(\mathcal{F}, \varepsilon, d_x) \le D(\mathcal{F}, \varepsilon)$, where
$$d_x(f,g) = \Bigl(\frac{1}{n}\sum_{i=1}^n\bigl(f(x_i) - g(x_i)\bigr)^2\Bigr)^{1/2}.$$
Lemma 16.1. If $\mathcal{F}$ satisfies the uniform entropy condition, then
$$E_{x'}\int_0^{d(0,f)}\log^{1/2}D(\mathcal{F},\varepsilon,d)\,d\varepsilon \le \int_0^{\sqrt{E_{x'}d(0,f)^2}}\log^{1/2}D(\mathcal{F},\varepsilon/2)\,d\varepsilon.$$
Proof. Using the inequality $(a+b)^2 \le 2(a^2+b^2)$,
$$d(f,g) = \Bigl(\frac{1}{n}\sum_{i=1}^n\dots\Bigr)^{1/2} \le$$ …
$, \varepsilon, d) \le D(\mathcal{F}, \varepsilon/2, d_{x,x'})$. (Lecture 16: Consequences of the generalized VC inequality, 18.465.)
$$E_{x'}\int_0^{d(0,f)}\log^{1/2}D(\mathcal{F},\varepsilon,d)\,d\varepsilon \le E_{x'}\int_0^{d(0,f)}\log^{1/2}D(\mathcal{F},\varepsilon/2,d_{x,x'})\,d\varepsilon \le E_{x'}\int_0^{d(0,f)}\log^{1/2}D(\mathcal{F},\varepsilon/2)\,d\varepsilon.$$
Let $\varphi(x) = \int_0^x\log^{1/2}D(\mathcal{F},\varepsilon/2)\,d\varepsilon$. It is concave because $\varphi'(x) = \log^{1/2}D(\mathcal{F},x/2)$ is decreasing …
Theorem 16.1. If $\mathcal{F}$ satisfies the uniform entropy condition and $\mathcal{F} = \{f : \mathcal{X}\to[0,1]\}$, then
$$P\Bigl(\forall f\in\mathcal{F},\ Ef - \frac{1}{n}\sum_{i=1}^n f(x_i) \le \frac{2^{9/2}}{\sqrt n}\int_0^{\sqrt{2Ef}}\log^{1/2}D(\mathcal{F},\varepsilon/2)\,d\varepsilon + 2^{7/2}\sqrt{\frac{2Ef\cdot t}{n}}\Bigr) \ge 1 - e^{-t}.$$
Proof. If $Ef \ge \frac{1}{n}\sum_{i=1}^n f(x_i)$, then $2\max\bigl(Ef, \frac{1}{n}\sum_{i=1}^n f(x_i)\bigr) = 2Ef$. If $Ef \le \frac{1}{n}\sum_{i=1}^n f(x_i)$, … and …
$\mathcal{F}$ with $VC(\mathcal{F}) = V$, where $d_1(f,g) = \frac{1}{n}\sum_{i=1}^n|f(x_i)-g(x_i)|$, so that $D(\mathcal{F},\varepsilon,d_1) \le \bigl(\frac{8e}{\varepsilon}\log\frac{7}{\varepsilon}\bigr)^V$. Note that if $f, g : \mathcal{X}\mapsto[0,1]$, then
$$d_2(f,g) = \Bigl(\frac{1}{n}\sum_{i=1}^n\bigl(f(x_i)-g(x_i)\bigr)^2\Bigr)^{1/2} \le \Bigl(\frac{1}{n}\sum_{i=1}^n\bigl|f(x_i)-g(x_i)\bigr|\Bigr)^{1/2}.$$
Hence, $\varepsilon < d_2(f,g)$ implies $\varepsilon^2 < d_1(f,g)$, and so
$$D(\mathcal{F},\varepsilon,d_2) \le D(\mathcal{F},\varepsilon^2,d_1) \le \Bigl(\frac{8e}{\varepsilon^2}$$ …
For $x \le 1/e$ we have $1 \le \log\frac{1}{x}$. Now, check for $x \ge 1/e$:
$$\int_0^x\sqrt{\log\frac{1}{\varepsilon}}\,d\varepsilon = \int_0^{1/e}\sqrt{\log\frac{1}{\varepsilon}}\,d\varepsilon + \int_{1/e}^x\sqrt{\log\frac{1}{\varepsilon}}\,d\varepsilon \le \frac{2}{e} + \int_{1/e}^x 1\,d\varepsilon = \frac{2}{e} + x - \frac{1}{e} = x + \frac{1}{e} \le 2x.$$
Using the above result, we get
$$P\Bigl(\forall f\in\mathcal{F},\ Ef - \frac{1}{n}\sum_{i=1}^n f(x_i) \le K\sqrt{\frac{V\,Ef}{n}\log\frac{1}{Ef}} + K\sqrt{\frac{t\,Ef}{n}}\Bigr) \ge$$ …
$$\mathcal{F} = \mathrm{conv}\,\mathcal{H} = \Bigl\{\sum_{i=1}^T\lambda_i h_i : h_i\in\mathcal{H},\ \sum_{i=1}^T\lambda_i\le 1,\ \lambda_i\ge 0,\ T\ge 1\Bigr\}.$$
Then $\mathrm{sign}(f(x))$ is the prediction of the label $y$. Let
$$\mathcal{F}_d = \mathrm{conv}_d\,\mathcal{H} = \Bigl\{\sum_{i=1}^d\lambda_i h_i : h_i\in\mathcal{H},\ \sum_{i=1}^d\lambda_i\le 1,\ \lambda_i\ge 0\Bigr\}.$$
Theorem 17.1. For any $x = (x_1, \dots, x_n)$, if $\log D(\mathcal{H},\varepsilon,d_x)\le KV\log\frac{2}{\varepsilon}$, then $\log D(\mathrm{conv}_d\,\mathcal{H},\varepsilon,d_x)\le KV$ …
Hence, we can approximate any $f\in\mathcal{F}_d$ by $f'\in\mathcal{F}_{D,d}$ within $\varepsilon$. Now, let $f = \sum_{i=1}^d\lambda_i h_i\in\mathcal{F}_{D,d}$ and consider the following construction. We will choose $Y_1(x), \dots, Y_k(x)$ from $h_1, \dots, h_d$ according to $\lambda_1, \dots, \lambda_d$:
$$P\bigl(Y_j(x) = h_i(x)\bigr) = \lambda_i \quad\text{and}\quad P\bigl(Y_j(x) = 0\bigr) = 1 - \sum_{i=1}^d\lambda_i.$$
Note that with this construction …
$$E\,d_x\Bigl(\frac{1}{k}\sum_{j=1}^k Y_j,\ f\Bigr)^2 \le \varepsilon^2.$$
So, there exists a deterministic combination $\frac{1}{k}\sum_{j=1}^k Y_j$ such that $d_x\bigl(\frac{1}{k}\sum_{j=1}^k Y_j,\ f\bigr)\le\varepsilon$. Define
$$\mathcal{F}'_{D,d} = \Bigl\{\frac{1}{k}\sum_{j=1}^k Y_j : k = 4/\varepsilon^2,\ Y_j\in\{h_1,\dots,h_d\}\subseteq\{h_1,\dots,h_D\}\Bigr\}.$$
Hence, we can approximate any $f = \sum\lambda_i h_i\in\mathcal{F}$ …
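This Maurey-type approximation step can be verified numerically. A minimal sketch in which the base functions $h_i$ are hypothetical random sign vectors on $n$ sample points (an assumption for illustration); with $k = 4/\varepsilon^2$, at least one random realization of $\frac{1}{k}\sum_j Y_j$ should be $\varepsilon$-close to $f$ in $d_x$.

```python
import random

random.seed(5)

# Maurey's empirical-approximation step: f = sum_i lambda_i h_i is
# approximated by an average of k = 4/eps^2 random draws Y_j with
# P(Y_j = h_i) = lambda_i. The base functions h_i here are hypothetical:
# random sign vectors evaluated on n sample points.
n, d = 200, 10
h = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(d)]
lam = [1.0 / d] * d  # a convex combination with sum lambda_i = 1
f = [sum(lam[i] * h[i][p] for i in range(d)) for p in range(n)]

def dx(u, v):
    return (sum((a - b) ** 2 for a, b in zip(u, v)) / n) ** 0.5

eps = 0.4
k = int(4 / eps ** 2)  # k = 25
best = float("inf")
for _ in range(50):  # try several random realizations of (Y_1, ..., Y_k)
    ys = random.choices(range(d), weights=lam, k=k)
    g = [sum(h[i][p] for i in ys) / k for p in range(n)]
    best = min(best, dx(f, g))
print(k, best)  # some realization should satisfy dx(f, g) <= eps
```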
the number of such strings is $\binom{k+d-1}{k}$. Hence,
$$\mathrm{card}\,\mathcal{F}'_{D,d} \le \binom{D}{d}\times\binom{k+d}{k} \le \Bigl(\frac{D(k+d)e^2}{d^2}\Bigr)^d,$$
using the inequality $1 + x \le e^x$, where $k = 4/\varepsilon^2$ …
integral. Theorem 18.1. Let $(\mathcal{X}, \mathcal{A}, \mu)$ be a measurable space, $\mathcal{F}\subset\{f \mid f : \mathcal{X}\to\mathbb{R}\}$ a class of measurable functions with a measurable, square-integrable envelope $F$ (i.e., $\forall x\in\mathcal{X}$, $\forall f\in\mathcal{F}$, $|f(x)| < F(x)$, and $\|F\|_2 = \bigl(\int F^2\,d\mu\bigr)^{1/2} < \infty$), and suppose the $\varepsilon$-net of $\mathcal{F}$ satisfies $N(\mathcal{F}, \varepsilon\|F\|_2, \|\cdot\|) \le C\bigl(\frac{1}{\varepsilon}\bigr)^V$. Then there is a constant $K$ that …
uniformly upper bounded ($\sup_k C_k\vee D_k < \infty$), such that
$$(18.1)\qquad \log N\bigl(\mathrm{conv}\,\mathcal{F}_{n\cdot k^q},\ C_k L\cdot n^{-W},\ \|\cdot\|_2\bigr) \le D_k\cdot n$$
for $n, k \ge 1$ and $q \ge 3 + V$. This implies the theorem, since if we let $k\to\infty$, we have $\log N(\mathrm{conv}\,\mathcal{F},\ C_\infty L\cdot n^{-W},\ \|\cdot\|_2) \le D_\infty n$. Let $\varepsilon = C_\infty C^{1/V} n^{-W}$ and choose $K$ in terms of $D_\infty$ and $C$; we get $C_\infty L\,n^{-W} = C_\infty C$ …
trivially. For general $n$, fix $m = n/d$ for large enough $d > 1$. For any $f\in\mathcal{F}_n$, there exists a projection $\pi_m f\in\mathcal{F}_m$ such that $\|f - \pi_m f\| \le C^{1/V} L\,m^{-1/V}$, by the definition of $\mathcal{F}_m$. Since
$$\sum_{f\in\mathcal{F}_n}\lambda_f\cdot f = \sum_{f\in\mathcal{F}_m}\mu_f\cdot f + \sum_{f\in\mathcal{F}_n}\lambda_f\cdot(f - \pi_m f),$$
we have $\mathrm{conv}\,\mathcal{F}_n\subset\mathrm{conv}\,\mathcal{F}_m + \mathrm{conv}\,\mathcal{G}_n$, and the number of elements …
Let $Y_1, \dots, Y_k$ be i.i.d. random variables such that $P(Y_i = f_j) = \lambda_j$ for all $i = 1, \dots, k$ and $j = 1, \dots, n$. It follows that $EY_i = \sum_{j=1}^n\lambda_j f_j$, and
$$E\Bigl\|\frac{1}{k}\sum_{i=1}^k Y_i - \sum_{j=1}^n\lambda_j f_j\Bigr\|^2 \le \frac{1}{k}\,E\Bigl\|Y_1 - \sum_{j=1}^n\lambda_j f_j\Bigr\|^2 \le \frac{1}{k}\,(\mathrm{diam}\,\mathcal{F})^2.$$
Thus at least one realization of $\frac{1}{k}\sum Y_i$ has a distance …
A direct computation bounds $N\bigl(\mathrm{conv}\,\mathcal{G}_n,\ \varepsilon\,\mathrm{diam}\,\mathcal{G}_n,\ \|\cdot\|_2\bigr)$ in terms of $C_1$, $m^{2/V}$, and $n^{-2W}$. By the definition of $\mathcal{F}_m$ and the induction assumption,
$$\log N\bigl(\mathrm{conv}\,\mathcal{F}_m,\ C_1 L\cdot m^{-W},\ \|\cdot\|_2\bigr) \le D_1\cdot m.$$
In other words, the $C_1 L\,m^{-W}$-net of $\mathrm{conv}\,\mathcal{F}_m$ contains at most $e^{D_1 m}$ elements. This defines …
For $k > 1$, construct $\mathcal{G}_{n,k}$ such that $\mathrm{conv}\,\mathcal{F}_{nk^q}\subset\mathrm{conv}\,\mathcal{F}_{n(k-1)^q} + \mathrm{conv}\,\mathcal{G}_{n,k}$, in a similar way as before. $\mathcal{G}_{n,k}$ contains at most $nk^q$ elements, and each has a norm smaller than $L\bigl(n(k-1)^q\bigr)^{-1/V}$. To bound the cardinality of an $Lk^{-2}n^{-W}$-net, we set
$$\varepsilon\cdot 2L\bigl(n(k-1)^q\bigr)^{-1/V} = Lk^{-2}n^{-W}, \quad\text{and get}\quad \varepsilon = \frac{1}{2}\,n^{-1/2}\,(k-1)^{q/V}\,k^{-2}.$$
(Lecture …)
classifier $y = \mathrm{sign}(f(x))$ with minimum testing error. For any $x$, the quantity $y\cdot f(x)$ is called the margin and can be considered as the confidence of the prediction made by $\mathrm{sign}(f(x))$. Classifiers like SVM and AdaBoost are maximal-margin classifiers. Maximizing the margin means penalizing small margins and controlling the complexity of …
by the definition of $d_z$,
$$\le \frac{1}{\delta}\Bigl(\frac{1}{n}\sum_{i=1}^n\bigl(y_i f(x_i) - y_i g(x_i)\bigr)^2\Bigr)^{1/2}\ \text{(Lipschitz condition)}\ = \frac{1}{\delta}\,d_x(f,g)\ \text{(definition of }d_x\text{)},$$
and the packing numbers for $\varphi_\delta(y\mathcal{F})$ and $\mathcal{F}$ satisfy the inequality
$$D\bigl(\varphi_\delta(y\mathcal{F}),\varepsilon,d_z\bigr) \le D(\mathcal{F},\varepsilon\delta,d_x).$$
Recall that for a VC-subgraph class $\mathcal{H}$, the packing number satisfies $D(\mathcal{H},$ …
$\mathcal{H}$. The packing number satisfies
$$D(\mathcal{H},\varepsilon,d_x) \le \Bigl(\frac{k}{\varepsilon}\log\frac{k}{\varepsilon}\Bigr)^V.$$
D. Haussler (1995) also proved the following two inequalities related to the packing number: $D(\mathcal{H},\varepsilon,\|\cdot\|_1) \le \bigl(\frac{k}{\varepsilon}\bigr)^V$, and $D(\mathcal{H},\varepsilon,d_x)\le K\bigl(\frac{1}{\varepsilon}\bigr)^{V}\dots$ Since $\mathrm{conv}(\mathcal{H})$ satisfies the uniform entropy condition (Lecture 16) and $f\in[-1,1]^{\mathcal{X}}$, with probability …
$$K\cdot\Bigl(E_n\varphi_\delta + n^{-\frac{V+2}{2(V+1)}}\,\delta^{-\frac{V}{V+1}} + \frac{u}{n}\Bigr)$$
for some constant $K$. We proceed to bound $E\varphi_\delta$ for $\delta\in\{\delta_k = e^{-k} : k\in\mathbb{N}\}$. Let $\exp(-u_k) = \frac{1}{(k+1)^2}e^{-u}$; it follows that $u_k = u + 2\log(k+1)$, and
$$1 - \sum_{k\in\mathbb{N}}\exp(-u_k) = 1 - \sum_{k\in\mathbb{N}}\frac{1}{(k+1)^2}e^{-u} = 1 - \Bigl(\frac{\pi^2}{6}-1\Bigr)e^{-u} > 1 - e^{-u},$$
with $\log(k+1) = \log\bigl(\log\frac{1}{\delta_k}+1\bigr)$ …
$e^{-u}$,
$$\frac{1}{n}\sum_{i=1}^n\varphi_\delta\bigl(y_i f(x_i)\bigr) \le \dots$$
$$P\bigl(y\cdot f(x) \le 0\bigr) \le K\cdot\inf_\delta\Bigl(P_n\bigl(y\cdot f(x)\le\delta\bigr) + n^{-\frac{V+2}{2(V+1)}}\,\delta^{-\frac{V}{V+1}} + \sqrt{\frac{2\log\bigl(\log\frac{1}{\delta}+1\bigr)}{n}} + \frac{u}{n}\Bigr).$$
(Lecture 20: Bounds on the generalization error of voting classifiers, 18.465.) As in the previous lecture, let $\mathcal{H} = \{h : \mathcal{X}\mapsto[-1,1]\}$ …
the $[0,1]$ interval, parametrized by $a$ and $b$. Then $VC(\mathcal{H}) = 2$. Let $\mathcal{F} = \mathrm{conv}\,\mathcal{H}$. First, rescale the functions: $f = \sum_{i=1}^T\lambda_i h_i = 2\sum_{i=1}^T\lambda_i h_i' - 1 = 2f' - 1$, where $f' = \sum_{i=1}^T\lambda_i h_i'$ and $h_i' = \frac{h_i+1}{2}$. We can generate any non-decreasing function $f'$ such that $f'(0) = 0$ and $f'(1) = 1$. Similarly, we can generate any non-increasing $f'$ …
$$\le \frac{1}{n}\sum_{i=1}^n I\bigl(y_i f(x_i)\le\delta\bigr) + E\varphi_\delta\bigl(yf(x)\bigr) - \frac{1}{n}\sum_{i=1}^n\varphi_\delta\bigl(y_i f(x_i)\bigr).$$
By going from $\frac{1}{n}\sum_{i=1}^n I(y_i f(x_i)\le 0)$ to $\frac{1}{n}\sum_{i=1}^n I(y_i f(x_i)\le\delta)$, we are penalizing low-confidence predictions. The margin $yf(x)$ is a measure of the confidence of the prediction. For the sake of …
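The quantity $P_n(yf(x)\le\delta)$ can be computed directly for a voted classifier. A minimal sketch; the voting classifier here is hypothetical, built from simulated base classifiers that each agree with the true label with probability 0.65 (an assumption for illustration).

```python
import random

random.seed(6)

# Empirical margin distribution P_n(y f(x) <= delta) for a hypothetical
# voting classifier f = (1/T) sum_i h_i, where each base classifier h_i
# agrees with the true label with probability 0.65 (a weak learner).
n, T = 500, 25
labels = [random.choice((-1, 1)) for _ in range(n)]
votes = [[y if random.random() < 0.65 else -y for y in labels]
         for _ in range(T)]
margins = [labels[p] * sum(votes[i][p] for i in range(T)) / T
           for p in range(n)]

fracs = []
for delta in (0.0, 0.1, 0.3):
    fracs.append(sum(m <= delta for m in margins) / n)
    # larger delta counts more low-confidence predictions as mistakes
    print(delta, round(fracs[-1], 3))
```

The printed fractions are non-decreasing in $\delta$: raising the margin threshold trades a larger empirical term for a smaller complexity term, exactly the tradeoff the bounds above formalize.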
on the generalization error of voting classifiers (18.465), we have
$$d_{x,y}\bigl(\varphi_\delta(yf(x)),\varphi_\delta(yg(x))\bigr) = \Bigl(\frac{1}{n}\sum_{i=1}^n\bigl(\varphi_\delta(y_if(x_i)) - \varphi_\delta(y_ig(x_i))\bigr)^2\Bigr)^{1/2} \le \Bigl(\frac{1}{\delta^2}\,\frac{1}{n}\sum_{i=1}^n\bigl(y_if(x_i) - y_ig(x_i)\bigr)^2\Bigr)^{1/2} = \frac{1}{\delta}\Bigl(\frac{1}{n}\sum_{i=1}^n\bigl(f(x_i)-g(x_i)\bigr)^2\Bigr)^{1/2} = \frac{1}{\delta}\,d_x(f,g),$$
where $f, g\in\mathcal{F}_d$. Choose an $\varepsilon\delta$-packing of $\mathcal{F}_d$ so that …
error of voting classifiers (18.465). We continue to prove the lemma from Lecture 20. Lemma 21.1. Let $\mathcal{F}_d = \mathrm{conv}_d\,\mathcal{H} = \{\sum_{i=1}^d\lambda_ih_i,\ h_i\in\mathcal{H}\}$ and fix $\delta\in(0,1]$. Then
$$P\Bigl(\forall f\in\mathcal{F}_d,\ \frac{E\varphi_\delta - \bar\varphi_\delta}{\sqrt{E\varphi_\delta}} \le K\Bigl(\sqrt{\frac{dV\log\frac{n}{\delta}}{n}} + \sqrt{\frac{t}{n}}\Bigr)\Bigr) \ge 1 - e^{-t}.$$
Proof. We showed that $\log D\bigl(\varphi_\delta(y\mathcal{F}_d),\varepsilon/2,d_{x,y}\bigr) \le KdV\log\frac{2}{\varepsilon\delta}$. …
Without loss of generality, assume $E\varphi_\delta \ge 1/n$; otherwise we are doing better than the Lemma claims. Then
$$\frac{K}{\sqrt n}\int_0^{\sqrt{E\varphi_\delta}}\log^{1/2}D\bigl(\varphi_\delta(y\mathcal{F}_d),\varepsilon\bigr)\,d\varepsilon \le K\sqrt{\frac{E\varphi_\delta\,dV\log\frac{n}{\delta}}{n}}.$$
So, with probability at least $1 - e^{-t}$, $E\varphi_\delta\bigl(yf(x)\bigr) - {}$ …
$$\forall f\in\mathcal{F}_d,\ \dots \ge 1 - \sum_{\delta,d}e^{-t}\,\frac{6\delta}{d^2\pi^2} = 1 - e^{-t}.$$
(Lecture 21: Bounds on the generalization error of voting classifiers, 18.465.) Since $t_{\delta,d} = t + \log\frac{d^2\pi^2}{6\delta}$, for all $f\in\mathcal{F}_d$,
$$\frac{E\varphi_\delta - \bar\varphi_\delta}{\sqrt{E\varphi_\delta}} \le K\Bigl(\sqrt{\frac{dV\log\frac{n}{\delta}}{n}} + \sqrt{\frac{t + \log\frac{d^2\pi^2}{6\delta}}{n}}\Bigr) \le K\Bigl($$ …
Solving the quadratic inequality for $\sqrt{E\varphi_\delta}$,
$$\sqrt{E\varphi_\delta} \le \frac{\varepsilon}{2} + \sqrt{\Bigl(\frac{\varepsilon}{2}\Bigr)^2 + \frac{1}{n}\sum_{i=1}^n\varphi_\delta}, \qquad E\varphi_\delta \le 2\Bigl(\frac{\varepsilon}{2}\Bigr)^2 + \frac{2}{n}\sum_{i=1}^n\varphi_\delta.$$
The bound becomes
$$P\bigl(yf(x)\le 0\bigr) \le K\Bigl(\underbrace{\frac{1}{n}\sum_{i=1}^n I\bigl(y_if(x_i)\le\delta\bigr) + \frac{dV}{n}\log\frac{n}{\delta} + \frac{t}{n}}_{(*)}\Bigr),$$
where $K$ is a rough constant. $(*)$ is not satisfactory because in boosting the bound should get better when $T$ …
$$P\bigl(yf(x)\le 0\bigr) \le K\Bigl(\underbrace{P_n\bigl(yf(x)\le\delta\bigr)}_{\text{inc. with }\delta} + \underbrace{\frac{V\min\bigl(T,(\log n)/\delta^2\bigr)\log n}{n}}_{\text{dec. with }\delta} + \frac{t}{n}\Bigr).$$
Proof. Let $f = \sum_{i=1}^T\lambda_ih_i$ and $g = \frac{1}{k}\sum_{j=1}^k Y_j$, where $P(Y_j = h_i) = \lambda_i$ and $P(Y_j = 0) = 1 - \sum_{i=1}^T\lambda_i$, as in Lecture 17. Then $EY_j(x) = f(x)$, and
$$P\bigl(yf(x)\le 0\bigr) = P\bigl(yf(x)\le 0,\ yg(x)\le\delta\bigr) + P\bigl(yf(x)\le$$ …
$$\le E_x\,e^{-kD\bigl(EY_1 + \frac{\delta}{2},\ EY_1\bigr)} \quad\text{(by Hoeffding's ineq.)},$$
because $D(p,q) \ge 2(p-q)^2$ (the KL divergence for binomial variables, Homework 1) and, hence,
$$D\Bigl(EY_1 + \frac{\delta}{2},\ EY_1\Bigr) \ge 2\Bigl(\frac{\delta}{2}\Bigr)^2 = \frac{\delta^2}{2}, \qquad \le E_x\,e^{-k\delta^2/2} = e^{-k\delta^2/2}.$$
We therefore obtain
$$(22.1)\qquad P\bigl(yf(x)\le 0\bigr) \le P\bigl(yg(x)\le\delta\bigr) + e^{-k\delta^2/2}$$ …
$$\le K\Bigl(\sqrt{\frac{t}{n}} + \sqrt{\frac{Vk\log\frac{n}{\delta}}{n}}\Bigr) = \varepsilon/2.$$
Note that $\Phi(x,y) = \frac{x-y}{\sqrt x}$ is increasing with $x$ and decreasing with $y$. By inequalities (22.1) and (22.2), and
$$E\varphi_\delta\bigl(yg(x)\bigr) \ge P\bigl(yg(x)\le\delta\bigr) \ge P\bigl(yf(x)\le 0\bigr) - \dots, \qquad \frac{1}{n}\sum_{i=1}^n\varphi_\delta\bigl(y_ig(x_i)\bigr) \le P_n\bigl(yg(x)\le 2\delta\bigr) \le P_n\bigl(yf(x)\le 3\delta\bigr) + \dots$$
By decreasing $x$ and increasing $y$ in $\Phi$ …
Lecture 23 Bounds in terms of sparsity. 18.465

Let f = Σ_{i=1}^T λ_i h_i, where λ_1 ≥ λ_2 ≥ … ≥ λ_T ≥ 0. Rewrite f as

  f = Σ_{i=1}^T λ_i h_i = Σ_{i=1}^d λ_i h_i + Σ_{i=d+1}^T λ_i h_i = Σ_{i=1}^d λ_i h_i + γ(d) Σ_{i=d+1}^T λ'_i h_i,

where γ(d) = Σ_{i=d+1}^T λ_i and λ'_i = λ_i/γ(d). Consider the following random approximation of f, where, as in the previous lectures,

  g = Σ_{i=1}^d λ_i h_i + γ(d) · (1/k) Σ_{j=1}^k Y_j,  with P(Y_j = h_i) = λ'_i for i = d+1, …, T.
Since (yY_j + 1)/2 ∈ [0, 1], applying Hoeffding's inequality, we get

  P_Y( γ(d) y( (1/k)Σ_{j=1}^k Y_j(x) − E Y_1(x) ) ≥ δ ) = P_Y( (1/k)Σ_{j=1}^k yY_j(x) − E yY_1(x) ≥ δ/γ(d) ) ≤ e^{−kδ²/(2γ(d)²)}.

Hence,

  P(yf(x) ≤ 0) ≤ P(yg(x) ≤ δ) + e^{−kδ²/(2γ²(d))}.

If we set e^{−kδ²/(2γ(d)²)} = 1/n, then k = (2γ²(d)/δ²) log n.
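The choice of k above balances the approximation error against 1/n; a tiny sketch makes the book-keeping explicit (the numbers are illustrative).

```python
import math

# Sketch: choosing k so that exp(-k * delta^2 / (2 * gamma(d)^2)) = 1/n,
# which gives k = (2 * gamma(d)^2 / delta^2) * log(n). Values are illustrative.

def k_for(gamma_d, delta, n):
    return 2.0 * gamma_d ** 2 / delta ** 2 * math.log(n)

n, delta, gamma_d = 1000, 0.1, 0.25
k = k_for(gamma_d, delta, n)
err = math.exp(-k * delta ** 2 / (2.0 * gamma_d ** 2))
print(k, err)  # err equals 1/n by construction
```

Note that k grows only like γ²(d) log n / δ²: the sparser the tail of the weights (small γ(d)), the fewer random terms are needed.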
Define γ(d, f) = Σ_{i=d+1}^T λ_i. Then with probability at least 1 − e^{−t},

  P(yf(x) ≤ 0) ≤ inf_{δ∈(0,1)} ( ε + √( P_n(yf(x) ≤ δ) + ε² ) )²,

where

  ε = K √( (V·e(f, δ)/n) log(n/δ) + t/n ).

Example 23.1. Consider the zero-error case. Define δ* = sup{δ > 0 : P_n(yf(x) ≤ δ) = 0}. Hence, P_n(yf(x) ≤ δ) = 0 for all δ < δ*.
Minimizing d + K_α d^{2−2α} (log n)/δ² over d (setting the derivative to zero, so that d^{2α−1} is proportional to (log n)/δ²) gives

  d = K_α · (log^{1/(2α−1)} n) / δ^{2/(2α−1)} ≤ K (log n) / δ^{2/(2α−1)}.

Hence, plugging in,

  e(f, δ) ≤ K (log n) / δ^{2/(2α−1)},

  P(yf(x) ≤ 0) ≤ K( (V log n)/(n (δ*)^{2/(2α−1)}) · log(n/δ*) + t/n ).

As α → ∞, the bound behaves like (V log n / n) · log(n/δ*).
Lecture 24 Bounds in terms of sparsity (example). 18.465

Group the base classifiers into clusters c with weights α_c = Σ_{h∈c} λ_h, and let {Y_k^c}_{k=1,…,N} be random variables with P(Y_k^c = h) = λ_h/α_c for h ∈ c. Let Z_k = Σ_c α_c Y_k^c and g = (1/N)Σ_{k=1}^N Z_k. Then EZ_k = Eg = f. We define

  σ_c² = var(Y_k^c) = Σ_{h∈c} (λ_h/α_c)(h − E Y_1^c)².

(If we define {Y_k}_{k=1,…,N} by P(Y_k = h) = λ_h for all h ∈ H and define g = (1/N)Σ_{k=1}^N Y_k, we might get a much larger var(Y_k).)
By Bernstein's inequality,

  P( (1/N)Σ_{k=1}^N (yZ_k − E yZ_k) > δ | yf(x) ≤ 0, σ_c² < r )
    ≤ exp( − N²δ² / (2Nσ_c² + (2/3)Nδ) )
    ≤ exp( − min( Nδ²/(4σ_c²), 3Nδ/8 ) )
    ≤ exp( − Nδ²/(4r) )   for r small enough.   (24.2)

Set this bound ≤ 1/n.
with probability at least 1 − e^{−u}. Using a technique developed earlier in this course, and taking the union bound over all m, δ, we get, with probability at least 1 − Ke^{−u},

  P(yg ≤ δ) ≤ K inf_{m,δ} ( P_n(yg ≤ 2δ) + √( (V N_m/n) log(n/δ) ) + u/n ).

(Since E P_n(yg ≤ 2δ) ≤ E P_n(yf(x) ≤ 3δ) + E P_n(σ_c² ≥ r), the same argument as before applies.)
Take two independent copies Z_k^{(1)}, Z_k^{(2)}. Then (Z_k^{(1)})² ≤ 1, (Z_k^{(2)})² ≤ 1, and (Z_k^{(1)} − Z_k^{(2)})² ≤ 4, with

  E^{(1,2)}( Z_k^{(1)} − Z_k^{(2)} )² = 2σ_c²,

and for the empirical version σ_N² = (1/(2N))Σ_{k=1}^N (Z_k^{(1)} − Z_k^{(2)})² we have E σ_N² = σ_c². We start with

  P_{Y_1,…,Y_N}( σ_c² ≥ 4r ) ≤ P^{(1,2)}_{Y_1,…,Y_N}( σ_N² ≥ 3r ) + P^{(1,2)}( σ_c² ≥ 4r | σ_N² < 3r ).
Since log D(H, ε) ≤ KV log(2/ε) by the assumption of our problem, we have

  log D( conv_{N_m}{ h_i · h_j : h_i, h_j ∈ H }, ε ) ≤ K · V · N_m · log(2/ε)

by the VC inequality, and

  ( Eφ_r(σ_N²) − E_n φ_r(σ_N²) ) / √( Eφ_r(σ_N²) ) ≤ (K/√r) · √( (V N_m log n)/n + u/n )

with probability at least 1 − e^{−u}. Using a technique developed earlier in this course, the bound follows.
Assume that for all i and all x_1, …, x_n, x'_i,

(25.1)  | Z(x_1, …, x_n) − Z(x_1, …, x_{i−1}, x'_i, x_{i+1}, …, x_n) | ≤ c_i.

Decompose Z − EZ as follows:

  Z(x_1,…,x_n) − E_{x'} Z(x'_1,…,x'_n)
    = ( Z(x_1,…,x_n) − E_{x'} Z(x'_1, x_2,…,x_n) )
    + ( E_{x'} Z(x'_1, x_2,…,x_n) − E_{x'} Z(x'_1, x'_2, x_3,…,x_n) )
    + …
    + ( E_{x'} Z(x'_1,…,x'_{n−1}, x_n) − E_{x'} Z(x'_1,…,x'_n) ).
For −1 ≤ s ≤ 1, by convexity,

  e^{λs} ≤ (e^λ + e^{−λ})/2 + s · (e^λ − e^{−λ})/2 ≤ e^{λ²/2} + s · sh(λ),

using the Taylor expansion (ch(λ) ≤ e^{λ²/2}). Now take s = Z_i/c_i, where, by assumption, −1 ≤ Z_i/c_i ≤ 1. Then

  e^{λZ_i} = e^{λ c_i · (Z_i/c_i)} ≤ e^{λ²c_i²/2} + (Z_i/c_i) · sh(λc_i).

Since E_{x_i} Z_i = 0, E_{x_i} e^{λZ_i} ≤ e^{λ²c_i²/2}. We now prove McDiarmid's inequality.
Iterating over the coordinates,

  E e^{λ(Z − EZ)} ≤ e^{λ² Σ_{i=1}^n c_i² / 2}.

Hence,

  P(Z − EZ > t) ≤ e^{−λt + λ² Σ_{i=1}^n c_i² / 2},

and we minimize over λ > 0 (at λ = t/Σ_{i=1}^n c_i²) to get the result of the theorem:

  P(Z − EZ > t) ≤ exp( − t² / (2 Σ_{i=1}^n c_i²) ). □

Example 25.1. Let F be a class of functions X → [a, b]. Define the empirical process

  Z(x_1, …, x_n) = sup_{f∈F} | Ef − (1/n)Σ_{i=1}^n f(x_i) |.

Then, for any i,
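The bounded-difference bound just proved can be illustrated on the simplest instance of Example 25.1, taking F to be a single function so that Z is just a sample mean of [0, 1]-valued variables (a sketch; the sup over a full class is simulated the same way but is slower):

```python
import numpy as np

# Sketch: McDiarmid for Z = sample mean of [a, b]-valued variables, where each
# coordinate change moves Z by at most c_i = (b - a)/n, so
# P(Z - EZ > t) <= exp(-n t^2 / (2 (b - a)^2)).  Parameters are illustrative.

rng = np.random.default_rng(2)
n, trials = 100, 20000
a, b = 0.0, 1.0
samples = rng.uniform(a, b, size=(trials, n))
dev = samples.mean(axis=1) - 0.5   # Z - EZ for each trial

t = 0.1
empirical_tail = np.mean(dev > t)
mcdiarmid_bound = np.exp(-n * t ** 2 / (2 * (b - a) ** 2))
print(empirical_tail, mcdiarmid_bound)  # empirical tail sits below the bound
```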
c_i = (b − a)/n for all i, so

  P(Z − EZ > t) ≤ exp( − t² / (2 Σ_{i=1}^n (b − a)²/n²) ) = e^{−nt²/(2(b−a)²)}.

By setting t = √(2u/n) · (b − a), we get

  P( Z − EZ > √(2u/n) · (b − a) ) ≤ e^{−u}.

Let ε_1, …, ε_n be i.i.d. such that P(ε = ±1) = 1/2. Define

  Z((ε_1, x_1), …, (ε_n, x_n)) = sup_{f∈F} | (1/n)Σ_{i=1}^n ε_i f(x_i) |.

Then, for any i, c_i ≤ 2(b − a)/n, and
  P( Z − EZ > √(8u/n) · (b − a) ) ≤ e^{−u}.

Lecture 26 Comparison inequality for Rademacher processes. 18.465

Define the following processes:

  Z(x) = sup_{f∈F} ( Ef − (1/n)Σ_{i=1}^n f(x_i) )  and  R(x) = sup_{f∈F} (1/n)Σ_{i=1}^n ε_i f(x_i).

Assume a ≤ f(x) ≤ b for all f, x. In the last lecture we proved that Z is concentrated around its expectation EZ.
If −M ≤ f(x) ≤ M for all f, x, then with probability at least 1 − e^{−t},

  ER ≤ R + M √(2t/n).

Hence, with high probability,

  Z(x) ≤ 2R(x) + 4M √(2t/n).

Theorem 26.1. If −1 ≤ f ≤ 1, then

  P( Z(x) ≤ 2ER(x) + 2√(2t/n) ) ≥ 1 − e^{−t}.
To prove the contraction inequality, it is enough to show that, for a contraction ϕ,

  E_ε G( sup_{t∈T} (t_1 + ε·ϕ(t_2)) ) ≤ E_ε G( sup_{t∈T} (t_1 + ε·t_2) ),

i.e. it is enough to show that we can erase the contraction for one coordinate while fixing all others. Since P(ε = ±1) = 1/2, we need to prove

  (1/2) G( sup_{t∈T} (t_1 + ϕ(t_2)) ) + (1/2) G( sup_{t∈T} (t_1 − ϕ(t_2)) )
    ≤ (1/2) G( sup_{t∈T} (t_1 + t_2) ) + (1/2) G( sup_{t∈T} (t_1 − t_2) ).
ϕ(s_2) ≤ −s_2. Hence,

  Σ ≤ G(t_1 + t_2) + G(s_1 − s_2) ≤ G( sup_{t∈T} (t_1 + t_2) ) + G( sup_{t∈T} (t_1 − t_2) ).

Case 3: t_2 ≥ 0, s_2 ≥ 0.

Case 3a: s_2 ≤ t_2. It is enough to prove

  G(t_1 + ϕ(t_2)) + G(s_1 − ϕ(s_2)) ≤ G(t_1 + t_2) + G(s_1 − s_2).

Note that s_2 − ϕ(s_2) ≥ 0.
  G(s_1 − ϕ(s_2)) − G(s_1 − s_2) = G( (s_1 − s_2) + (s_2 − ϕ(s_2)) ) − G(s_1 − s_2) ≤ G(t_1 + t_2) − G(t_1 + ϕ(t_2)).

Case 3b: t_2 ≤ s_2. Again, it's enough to show Σ ≤ G(s_1 + s_2) + G(t_1 − t_2), i.e.

  G(t_1 + ϕ(t_2)) − G(t_1 − t_2) ≤ G(s_1 + s_2) − G(s_1 − ϕ(s_2)).

We have t_1 − t_2 ≤ t_1 − ϕ(t_2) ≤ s_1 − ϕ(s_2), since (s_1, s_2) achieves the maximum.
Case 4 is handled the same way as Case 3. We now apply the theorem with G(s) = (s)_+.

Lemma 26.1.

  E sup_{t∈T} | Σ_{i=1}^n ε_i ϕ_i(t_i) | ≤ 2 E sup_{t∈T} | Σ_{i=1}^n ε_i t_i |.

Proof. Note that

  |x| = (x)_+ + (−x)_+.

We apply the Contraction Inequality for Rademacher processes with G(s) = (s)_+ to each of the two terms.
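Lemma 26.1 can be illustrated numerically on a tiny finite index set T, computing the expectation over ε exactly by enumerating all sign patterns. The contraction ϕ(s) = sin(s) (with |sin'| ≤ 1 and sin(0) = 0) and the vectors in T are arbitrary choices for the sketch.

```python
import numpy as np
from itertools import product

# Toy check of Lemma 26.1: E sup_t |sum_i eps_i phi_i(t_i)| <= 2 E sup_t |sum_i eps_i t_i|
# for a contraction phi with phi(0) = 0. Exact expectation over all 2^n sign patterns.

rng = np.random.default_rng(3)
n = 6
T = rng.uniform(-1.0, 1.0, size=(5, n))  # 5 candidate vectors t, chosen arbitrarily

def expected_sup(vectors):
    total = 0.0
    for eps in product([-1.0, 1.0], repeat=n):  # exact average over eps
        eps = np.asarray(eps)
        total += max(abs(eps @ v) for v in vectors)
    return total / 2 ** n

lhs = expected_sup(np.sin(T))  # contraction applied coordinate-wise
rhs = expected_sup(T)
print(lhs, rhs)  # lhs is at most 2 * rhs, as the lemma asserts
```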
For a class F of [0, 1]-valued functions, let

  Z = sup_{f∈F} ( Ef − (1/n)Σ_{i=1}^n f(x_i) )  and  R = sup_{f∈F} (1/n)Σ_{i=1}^n ε_i f(x_i),

where ε_1, …, ε_n are Rademacher random variables. For any f ∈ F, unknown and to be estimated, the empirical error Z can be probabilistically bounded by R in the following way. Using the fact that EZ ≤ 2ER and the Martingale inequality,

  P( Z ≤ EZ + √(2u/n) ) ≥ 1 − e^{−u},

so Z ≤ 2ER + √(2u/n) with probability at least 1 − e^{−u}.
With probability at least 1 − e^{−u},

  Eϕ_δ(yf(x))
    ≤ E_n ϕ_δ(yf(x)) + sup_{f∈F} ( Eϕ_δ(yf(x)) − (1/n)Σ_{i=1}^n ϕ_δ(y_i f(x_i)) )       [the sup is Z]
    ≤ E_n ϕ_δ(yf(x)) + 2 · E sup_{f∈F} (1/n)Σ_{i=1}^n ε_i ϕ_δ(y_i f(x_i)) + √(2u/n)     [the expectation is ER]
    ≤ E_n ϕ_δ(yf(x)) + (2/δ) · E sup_{f∈F} (1/n)Σ_{i=1}^n ε_i y_i f(x_i) + √(2u/n)      [by contraction]
    = E_n ϕ_δ(yf(x)) + (2/δ) · E sup_{f∈F} (1/n)Σ_{i=1}^n ε_i f(x_i) + √(2u/n),

since ε_i y_i has the same distribution as ε_i.
Lecture 27 Application of Martingale inequalities. Generalized Martingale inequalities. 18.465

  P( sup_h (1/n)Σ_{i=1}^n ε_i h(x_i) ≤ K( (1/√n) ∫_0^1 log^{1/2} D(H, ε, d_x) dε + √(u/n) ) )
    = P( sup_h (1/n)Σ_{i=1}^n ε_i h(x_i) ≤ K( (1/√n) ∫_0^1 √( V log(1/ε) ) dε + √(u/n) ) )
    ≥ 1 − e^{−u}.

Thus,

  E sup_h (1/n)Σ_{i=1}^n ε_i h(x_i) ≤ K √(V/n).
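The entropy integral above is finite and of order √V: substituting ε = e^{−s} turns ∫₀¹ √(V log(1/ε)) dε into √V · Γ(3/2) = √V · √π/2. The sketch below confirms this by simple numeric quadrature.

```python
import math

# Sketch: numeric check that int_0^1 sqrt(V * log(1/eps)) d eps
# equals sqrt(V) * sqrt(pi) / 2 (substitute eps = e^{-s}, giving Gamma(3/2)).

def entropy_integral(V, m=200000):
    # midpoint rule on (0, 1); the integrand is integrable at eps = 0
    h = 1.0 / m
    return sum(math.sqrt(V * math.log(1.0 / ((j + 0.5) * h))) * h for j in range(m))

V = 4
numeric = entropy_integral(V)
closed_form = math.sqrt(V) * math.sqrt(math.pi) / 2.0
print(numeric, closed_form)
```

This is why a VC class of dimension V contributes only a K√(V/n) Rademacher term, with no residual dependence on the scale ε.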
d_1(x_1, …, x_n) + d_2(x_2, …, x_n) + …, with the assumptions that E_{x_i} d_i = 0 and ‖d_i‖_∞ ≤ c_i. We will give a generalized martingale inequality below. Write

  Σ_{i=1}^n d_i = Z − EZ,

where d_i = d_i(x_i, …, x_n), max_i ‖d_i‖_∞ ≤ C, σ_i² = σ_i²(x_{i+1}, …, x_n) = var(d_i), and E_{x_i} d_i = 0. Take ε > 0; we bound

  P( Σ_{i=1}^n d_i − ε Σ_{i=1}^n σ_i² ≥ t ).
  E_{x_n} exp(λd_n − λε σ_n²) ≤ 1.

Iterating over the coordinates,

  P( Σ_{i=1}^n d_i − ε Σ_{i=1}^n σ_i² ≥ t ) ≤ exp(−λt).

Take t = u/λ; we get

  P( Σ_{i=1}^n d_i ≥ ε Σ_{i=1}^n σ_i² + u/λ ) ≤ e^{−u}.

To minimize the sum ε Σ σ_i² + u/λ, one optimizes over ε, with λ and ε related through the factor (1 + 2εC).
We minimize

  (1/n)Σ_{i=1}^n (y_i − h(x_i))²

over H_k, where k is the number of layers. Define L(y, h(x)) = (y − h(x))², so 0 ≤ L(y, h(x)) ≤ 4. We want to bound EL(y, h(x)). From the previous lectures, with probability at least 1 − e^{−t},

  sup_{h∈H_k} | EL(y, h(x)) − (1/n)Σ_{i=1}^n L(y_i, h(x_i)) | ≤ 2 E sup_{h∈H_k} | (1/n)Σ_{i=1}^n ε_i L(y_i, h(x_i)) | + K√(t/n).
The map s ↦ s²/4 : [−2, 2] → R is a contraction, because the largest derivative of s² on [−2, 2] is 4. Hence,

  E sup_{h∈H_k(A_1,…,A_k)} | (1/n)Σ_{i=1}^n ε_i (y_i − h(x_i))² |
    = E E_ε sup_{h∈H_k(A_1,…,A_k)} | (1/n)Σ_{i=1}^n ε_i (y_i − h(x_i))² |
    ≤ 8 E sup_{h∈H_k(A_1,…,A_k)} | (1/n)Σ_{i=1}^n ε_i (y_i − h(x_i)) |
    ≤ 8 ( E | (1/n)Σ_{i=1}^n ε_i y_i | + E sup_{h∈H_k(A_1,…,A_k)} | (1/n)Σ_{i=1}^n ε_i h(x_i) | ).
For h ∈ H_k(A_1, …, A_k) write h = Σ_j α'_j h_j with Σ_j |α'_j| ≤ A_k and h_j ∈ H_{k−1}(A_1, …, A_{k−1}). Then

  E_ε sup_{h∈H_k(A_1,…,A_k)} | (1/n)Σ_{i=1}^n ε_i h(x_i) |
    ≤ 2L A_k E_ε sup_{h∈H_{k−1}(A_1,…,A_{k−1})} | (1/n)Σ_{i=1}^n ε_i h(x_i) |.

The last equality holds because sup_{Σ|λ_j|≤1} | Σ_j λ_j s_j | = max_j |s_j|, i.e. the max is attained at one of the vertices.
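The layer-peeling step compounds multiplicatively: each layer with weight bound A_j and an L-Lipschitz activation multiplies the Rademacher complexity bound by 2LA_j, leaving the base-class complexity (of order √(V/n) for a VC base class) inflated by the product over layers. A sketch of the book-keeping, with illustrative constants not taken from the lecture:

```python
import math

# Sketch: compounding the recursion
#   E sup_{H_k} |...| <= 2 * L * A_k * E sup_{H_{k-1}} |...|
# down to a base class with Rademacher complexity about sqrt(V/n).
# Constants n, V, L, A_j are illustrative.

def layered_bound(base, A, L=1.0):
    # base: Rademacher complexity bound for the base class H_0
    for a in A:
        base *= 2.0 * L * a
    return base

n, V = 10000, 3
base = math.sqrt(V / n)
bound2 = layered_bound(base, A=[1.5, 1.5])        # 2 layers
bound3 = layered_bound(base, A=[1.5, 1.5, 1.5])   # 3 layers
print(bound2, bound3)  # each extra layer multiplies the bound by 2 * L * A
```

The exponential growth in k is why the bound is only useful when the ℓ₁ norms A_j of the layer weights are controlled.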