Lecture 29. Generalization bounds for neural networks. 18.465

With probability at least $1-e^{-t}$,
$$\sup_{h\in\mathcal H_k(A_1,\dots,A_k)}\Big(\mathbb E L(y,h(x))-\frac1n\sum_{i=1}^n L(y_i,h(x_i))\Big)\le \underbrace{8\,\mathbb E\sup_{h\in\mathcal H_k(A_1,\dots,A_k)}\Big|\frac1n\sum_{i=1}^n\varepsilon_i h(x_i)\Big|}_{=:\,Z(\mathcal H_k(A_1,\dots,A_k))}+\sqrt{\frac tn},$$
and, applying the contraction principle layer by layer,
$$Z(\mathcal H_k(A_1,\dots,A_k))\le 8\prod_{j=1}^k(2LA_j)\cdot\mathbb E\sup_{h\in\mathcal H}\Big|\frac1n\sum_{i=1}^n\varepsilon_i h(x_i)\Big|.$$
Assume $\mathcal H$ is …
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
Since $-1\le h\le1$ for all $h\in\mathcal H$, Dudley's entropy bound gives
$$\mathbb P_\varepsilon\Bigg(\sup_{h\in\mathcal H}\Big|\frac1n\sum_{i=1}^n\varepsilon_ih(x_i)\Big|\le\frac{K}{\sqrt n}\int_0^1\log^{1/2}D(\mathcal H,\varepsilon,d_x)\,d\varepsilon+K\sqrt{\frac tn}\Bigg)\ge1-2e^{-t}.$$
Since $\mathcal H$ is a VC-subgraph class with $VC(\mathcal H)=V$, $\log D(\mathcal H,\varepsilon,d_x)\le KV\log\frac2\varepsilon$, and computing the entropy integral we have
$$\mathbb E_\varepsilon\sup_{h\in\mathcal H}\Big|\frac1n\sum_{i=1}^n\varepsilon_ih(x_i)\Big|\le K\Bigg(\sqrt{\frac Vn}+\int_{\sqrt{V/n}}^\infty 2e^{-\frac{nu^2}{K^2}}\,du\Bigg)\le K\sqrt{\frac Vn}+\frac{K}{\sqrt n}\int_0^\infty 2e^{-x^2}\,dx\le K\sqrt{\frac Vn}$$
for $V\ge2$. (We made a change of variable so that $x=\sqrt n\,u/K$; the constants $K$ change their values from line to line.)

We obtain a bound on $Z(\mathcal H_k(A_1,\dots,A_k))$ for each fixed choice of scales. To make it uniform, take $A_j$ of the form $2^{\ell_j}$ and set $w_j=\ell_j$ if $\ell_j\ne0$ and $w_j=1$ if $\ell_j=0$. With $t$ replaced by $t+2\sum_{j=1}^k\log|w_j|$,
$$Z\big(\mathcal H_k(2^{\ell_1},\dots,2^{\ell_k})\big)\le K\prod_{j=1}^k\big(2L\cdot2^{\ell_j}\big)\cdot\Bigg(8\sqrt{\frac Vn}+8\sqrt{\frac{t+2\sum_{j=1}^k\log|w_j|}{n}}\Bigg)$$
with probability at least $1-e^{-t-2\sum_{j=1}^k\log|w_j|}=1-\prod_{j=1}^k\frac{1}{|w_j|^2}\,e^{-t}$. Since $|\ell_j|\le|\log A_j|+1$, we have $|w_j|\le|\log A_j|+1$. Therefore, with probability at least $1-e^{-u}$, for all $(A_1,\dots,A_k)$,
$$Z(\mathcal H_k(A_1,\dots,A_k))\le K\prod_{j=1}^k\big(4L\cdot A_j\big)\cdot\Bigg(8\sqrt{\frac Vn}+8\sqrt{\frac{2\sum_{j=1}^k\log(|\log A_j|+1)+k\log5+u}{n}}\Bigg).$$
Notice that $\log(|\log A_j|+1)$ is large only when $A_j$ is very lar…
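The Rademacher average driving these bounds is easy to approximate numerically for a finite class. A minimal sketch (the class $\mathcal H$ and sample size below are toy choices, not from the notes): estimate $\mathbb E_\varepsilon\sup_{h\in\mathcal H}\big|\frac1n\sum_i\varepsilon_i h(x_i)\big|$ by averaging over random sign vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_rademacher(H_values, n_trials=2000, rng=rng):
    """Estimate (1/n) E_eps sup_h |sum_i eps_i h(x_i)|.

    H_values: array of shape (num_classifiers, n) holding h(x_i)
    for each h in a finite class H, evaluated on a fixed sample.
    """
    m, n = H_values.shape
    total = 0.0
    for _ in range(n_trials):
        eps = rng.choice([-1.0, 1.0], size=n)     # Rademacher signs
        total += np.abs(H_values @ eps).max() / n  # sup over the finite class
    return total / n_trials

# Toy class: 5 random {-1,+1}-valued classifiers on n = 200 points.
H = rng.choice([-1.0, 1.0], size=(5, 200))
print(round(empirical_rademacher(H), 3))
```

For a class of 5 elements one expects a value of order $\sqrt{2\log 5/n}$, consistent with the $K\sqrt{V/n}$ rate above.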
Lecture 30. Generalization bounds for kernel methods. 18.465

The function $\varphi$ is called the feature map. Family of classifiers:
$$\mathcal F=\big\{(w,\varphi(x))_{\mathcal H}\ :\ \|w\|_{\mathcal H}\le1\big\},\qquad f:\mathcal X\to\mathbb R.$$
Algorithms: (1) SVMs,
$$f(x)=\sum_{i=1}^n\alpha_iK(x_i,x)=\Big(\underbrace{\textstyle\sum_{i=1}^n\alpha_i\varphi(x_i)}_{w},\ \varphi(x)\Big)_{\mathcal H}.$$
Here, instead of taking any $w$, we only take $w$ that is a linear combination of the images of the data po…

$$\le\mathbb E\sup_{f\in\mathcal F}\Big(\mathbb E\varphi_\delta(yf(x))-\frac1n\sum_{i=1}^n\varphi_\delta(y_if(x_i))\Big)+\sqrt{\frac{2t}{n}}.$$
Using the symmetrization technique,
$$\mathbb E\sup_{f\in\mathcal F}\Big(\mathbb E\big(\varphi_\delta(yf(x))-1\big)-\frac1n\sum_{i=1}^n\big(\varphi_\delta(y_if(x_i))-1\big)\Big)\le2\,\mathbb E\sup_{f\in\mathcal F}\Big|\frac1n\sum_{i=1}^n\varepsilon_i\big(\varphi_\delta(y_if(x_i))-1\big)\Big|.$$
Since $\frac1\delta(\varphi_\delta-1)$ is a contraction, this is at most $\frac4\delta\,\mathbb E\sup_{f\in\mathcal F}\big|\frac1n\sum_{i=1}^n\varepsilon_if(x_i)\big|$, and
$$\frac{4}{\delta n}\,\mathbb E\sup_{\|w\|\le1}\Big(w,\sum_{i=1}^n\varepsilon_i\varphi(x_i)\Big)_{\mathcal H}=\frac{4}{\delta n}\,\mathbb E\Big\|\sum_{i=1}^n\varepsilon_i\varphi(x_i)\Big\|_{\mathcal H}=\frac{4}{\delta n}\,\mathbb E\Big(\sum_{i,j}\varepsilon_i\varepsilon_jK(x_i,x_j)\Big)^{1/2}$$
$$\le\frac{4}{\delta n}\Big(\mathbb E\sum_{i,j}\varepsilon_i\varepsilon_jK(x_i,x_j)\Big)^{1/2}=\frac{4}{\delta n}\Big(\sum_{i=1}^n\mathbb EK(x_i,x_i)\Big)^{1/2}\ \dots
$$
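The kernel representation above never requires forming $\varphi$ explicitly: only kernel evaluations are needed. A minimal sketch, assuming a Gaussian (RBF) kernel; the support points and coefficients $\alpha_i$ are made up for illustration.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # K(a, b) = exp(-gamma * |a - b|^2), a positive-definite kernel
    return np.exp(-gamma * np.sum((a - b) ** 2))

def f(x, support, alphas, gamma=1.0):
    # w = sum_i alpha_i phi(x_i) is never formed explicitly; only the inner
    # products (w, phi(x))_H = sum_i alpha_i K(x_i, x) are ever computed.
    return sum(a * rbf_kernel(xi, x, gamma) for a, xi in zip(alphas, support))

# Two "support vectors" with opposite coefficients.
support = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
alphas = [1.0, -1.0]
print(f(np.array([0.1, 0.0]), support, alphas))
```

Points near the first support vector get positive scores, points near the second get negative scores, exactly as the sign of $\sum_i\alpha_iK(x_i,x)$ dictates.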
… $d(f,g)=\sup_{x\in\mathcal X}|f(x)-g(x)|$. The following theorem appears in Cucker & Smale:

Theorem 31.1. For all $h\ge d$,
$$\log N(\mathcal F,\varepsilon,d)\le\Big(\frac{C_h}{\varepsilon}\Big)^{2d/h},$$
where $C_h$ is a constant. Note that for any $x_1,\dots,x_n$,
$$d_x(f,g)=\Big(\frac1n\sum_{i=1}^n\big(f(x_i)-g(x_i)\big)^2\Big)^{1/2}\le d(f,g)=\sup_x|f(x)-g(x)|,$$
hence $d(f,g)\le\varepsilon$ implies $d_x(f,g)\le\varepsilon$. Assume the loss function …

Let $\mathcal C=\{C\}$ be a random collection of sets. Assume that $\mathcal C(x_1,\dots,x_n)$ satisfies:
(1) $\mathcal C(x_1,\dots,x_n)\subseteq\mathcal C(x_1,\dots,x_n,x_{n+1})$;
(2) $\mathcal C(\pi(x_1,\dots,x_n))=\mathcal C(x_1,\dots,x_n)$ for any permutation $\pi$.
Let
$$\Delta^{\mathcal C}(x_1,\dots,x_n)=\operatorname{card}\big\{C\cap\{x_1,\dots,x_n\}\ :\ C\in\mathcal C\big\}\qquad\text{and}\qquad G(n)=\mathbb E\,\Delta^{\mathcal C(x_1,\dots,x_n)}(x_1,\dots,x_n).$$

Lecture 31. 18.465

… symmetrizing with an independent copy $x'_1,\dots,x'_n$ and bounding, for each realized set $C$, the tail of the self-normalized difference
$$\frac{\frac1n\sum_{i=1}^n\big(I(x'_i\in C)-I(x_i\in C)\big)}{\sqrt{\frac1n\sum_{i=1}^n\big(I(x'_i\in C)+I(x_i\in C)\big)}}$$
by $e^{-nt^2/4}$, we get
$$\dots\le4\,\mathbb E\,\Delta^{\mathcal C(x_1,\dots,x_n,x'_1,\dots,x'_n)}(x_1,\dots,x_n,x'_1,\dots,x'_n)\cdot e^{-nt^2/4}=4G(2n)\,e^{-nt^2/4}.$$

Lecture 32. Applications of random VC inequality to voting algorithms and SVM. 18.465
(Voting classifiers): the algorithm outputs $f=\sum_{i=1}^T\lambda_ih_i$. Take the random approximation $g(x)=\frac1k\sum_{j=1}^kY_j(x)$, where $Y_1,\dots,Y_k$ are i.i.d. with $\mathbb P(Y_j=h_i)=\lambda_i$, so that $\mathbb EY_j(x)=f(x)$. Fix $\delta>0$. Then
$$\mathbb P(yf(x)\le0)=\mathbb P\big(yf(x)\le0,\ yg(x)\le\delta\big)+\mathbb P\big(yf(x)\le0,\ yg(x)>\delta\big)\le\mathbb P\big(yg(x)\le\delta\big)+\mathbb E_{x,y}\mathbb P_Y\Big(y\,\frac1k\sum_{j=1}^kY_j(x)>\delta,\ y\,\mathbb EY_j(x)\le0\Big)\ \dots$$
Let
$$\mathcal F_k=\Big\{\frac1k\sum_{j=1}^kh_j(x)\ :\ h_j\in\mathcal H\Big\}.$$
Note that $\mathcal H(x_1,\dots,x_n)\subseteq\mathcal H(x_1,\dots,x_n,x_{n+1})$ and $\mathcal H(\pi(x_1,\dots,x_n))=\mathcal H(x_1,\dots,x_n)$. In the last lecture, we proved a tail bound for
$$\sup_{C\in\mathcal C}\frac{P(C)-\frac1n\sum_{i=1}^nI(x_i\in C)}{\sqrt{P(C)}}.$$
Setting that bound to $e^{-u}$ and solving for $t$, we get
$$t=\sqrt{\frac2n\big(u+k\log N(2n)+\log(8n+4)\big)}.$$
So, with probability at least $1-e^{-u}$, for all $C$,
$$\frac{\big(P(C)-\frac1n\sum_{i=1}^nI(x_i\in C)\big)^2}{P(C)}\le\frac2n\big(u+k\log N(2n)+\log(8n+4)\big).$$
In particular,
$$\frac{\Big(\mathbb P(yg(x)\le\delta)-\frac1n\sum_{i=1}^nI(y_ig(x_i)\le\delta)\Big)^2}{\mathbb P(yg(x)\le\delta)}\le\ \dots\ +\,e^{-k\delta^2/2}.$$
Choose $k$ such that $e^{-k\delta^2/2}=\frac1n$, i.e. $k=\frac{2\log n}{\delta^2}$. Plug (32.2) and (32.3) into (32.1) (look at $\frac{(a-b)^2}{a}$). Hence,
$$\frac{\Big(\mathbb P(yf(x)\le0)-\frac2n-\frac1n\sum_{i=1}^nI(y_if(x_i)\le2\delta)\Big)^2}{\mathbb P(yf(x)\le0)}\le\frac2n\big(\dots\big)$$
with probability at least $1-e^{-u}$. Recall that for SVM, $N(n)=\operatorname{card}\{\pm K(x_i,\cdot)\}\le2n$ …
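The randomized approximation $g$ of the voting classifier $f$ can be simulated directly. A sketch with a made-up finite set of base classifiers and weights $\lambda_i$; it only illustrates that $g$ is an unbiased approximation of $f$ at a fixed point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Voting classifier f(x) = sum_t lambda_t h_t(x), with {-1,+1} base rules.
hs = [lambda x: 1.0 if x > 0 else -1.0,
      lambda x: 1.0 if x > 1 else -1.0,
      lambda x: 1.0 if x > -1 else -1.0]
lambdas = np.array([0.5, 0.2, 0.3])

def f(x):
    return sum(l * h(x) for l, h in zip(lambdas, hs))

def g(x, k, rng=rng):
    # Random approximation: Y_1..Y_k i.i.d. with P(Y_j = h_i) = lambda_i.
    idx = rng.choice(len(hs), size=k, p=lambdas)
    return np.mean([hs[i](x) for i in idx])

x = 0.5
approx = np.mean([g(x, k=200) for _ in range(200)])
print(f(x), round(approx, 2))  # unbiased: E g(x) = f(x)
```

The Hoeffding bound $e^{-k\delta^2/2}$ used above is exactly the probability that this average of $k$ i.i.d. bounded draws deviates from its mean by more than $\delta$.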
Lecture 33. Talagrand's convex-hull distance inequality. 18.465

We first verify the elementary inequality used in the induction step:
$$\min_{0\le\lambda\le1}r^{-\lambda}e^{(1-\lambda)^2/4}\le2-r,\qquad 0<r\le1.$$
Case a): $r\ge e^{-1/2}$. Let $\lambda=1+2\log r\in[0,1]$; the claim becomes $-\log r-(\log r)^2\le\log(2-r)$, i.e.
$$f(r):=\log(2-r)+\log r+(\log r)^2\ge0.$$
Is $f(r)\ge0$? Since $f(1)=0$, it is enough to prove $f'(r)\le0$, where
$$f'(r)=-\frac1{2-r}+\frac1r+\frac{2\log r}{r},$$
and for this it is enough to show $(rf'(r))'\ge0$: …
Case b): $r\le e^{-1/2}$. Take $\lambda=0$; then we need $\frac14-\log(2-r)\le0\iff r\le2-e^{1/4}$, and indeed $e^{-1/2}\le2-e^{1/4}$.

Given $x=(x_1,\dots,x_n)$ and $A\subseteq\mathcal X^n$, define:
(1) $V(A,x)=\{(s_1,\dots,s_n)\ :\ s_i\in\{0,1\},\ \exists y\in A\text{ s.t. if }s_i=0\text{ then }x_i=y_i\}$. For example, $x=(x_1,x_2,\dots,x_n)$ and $y=(y_1,y_2,\dots,y_n)\in A$ with $x_1=y_1$, $x_2\ne y_2$, …, $x_n=y_n$ give $s=(0,1,\dots,0)\in V(A,x)$. Note that it can happen that $x_i=y_i$ but $s_i=1$.
(2) $U(A,x)=\operatorname{conv}V(A,x)$;
(3) $d(A,x)=\min_{u\in U(A,x)}|u|^2$.

For $n=1$: $\int e^{\frac14 d(A,x)}\,dP\le P(A)+e^{1/4}\big(1-P(A)\big)\le\frac1{P(A)}$.

For the induction step, let $x=(x_1,\dots,x_n,x_{n+1})=(z,x_{n+1})$ and define
$$A(x_{n+1})=\{(y_1,\dots,y_n):(y_1,\dots,y_n,x_{n+1})\in A\},\qquad B=\{(y_1,\dots,y_n):\exists y_{n+1},\ (y_1,\dots,y_n,y_{n+1})\in A\}.$$
One can verify that $s\in U(A(x_{n+1}),z)\Rightarrow(s,0)\in U(A,(z,x_{n+1}))$ and $t\in U(B,z)\Rightarrow(t,1)\in U(A,(z,x_{n+1}))$. Take $0\le\lambda\le1$. Then
$$d\big(A,(z,x_{n+1})\big)\le\lambda\,d\big(A(x_{n+1}),z\big)+(1-\lambda)\,d(B,z)+(1-\lambda)^2,$$
so the inner integral satisfies
$$\int_{\mathcal X^n}e^{\frac14 d(A,(z,x_{n+1}))}\,dP^n(z)\le e^{\frac14(1-\lambda)^2}\int_{\mathcal X^n}e^{\big(\frac14 d(A(x_{n+1}),z)\big)\lambda+\big(\frac14 d(B,z)\big)(1-\lambda)}\,dP^n(z).$$
We now use Hölder's inequality,
$$\int fg\,dP\le\Big(\int f^p\,dP\Big)^{1/p}\Big(\int g^q\,dP\Big)^{1/q},\qquad\frac1p+\frac1q=1$$
(here $p=1/\lambda$, $q=1/(1-\lambda)$), together with the induction hypothesis:
$$\dots\le e^{\frac14(1-\lambda)^2}\Big(\frac1{P^n(A(x_{n+1}))}\Big)^\lambda\Big(\frac1{P^n(B)}\Big)^{1-\lambda}=\frac1{P^n(B)}\,e^{\frac14(1-\lambda)^2}r^{-\lambda},\qquad r=\frac{P^n(A(x_{n+1}))}{P^n(B)},$$
and, minimizing over $\lambda$ via the lemma above,
$$\int_{\mathcal X^n}e^{\frac14 d(A,(z,x_{n+1}))}\,dP^n(z)\le\frac1{P^n(B)}\Big(2-\frac{P^n(A(x_{n+1}))}{P^n(B)}\Big).$$
Now integrate over the last coordinate; when averaging over $x_{n+1}$, we recover the measure of $A$:
$$\int e^{\frac14 d(A,x)}\,dP^{n+1}(x)=\int\!\!\int e^{\frac14 d(A,(z,x_{n+1}))}\,dP^n(z)\,dP(x_{n+1})\le\frac1{P^{n+1}(A)},$$
using $\int P^n(A(x_{n+1}))\,dP(x_{n+1})=P^{n+1}(A)$ and $u(2-u)\le1$.
Lecture 34. Consequences of Talagrand's convex-hull distance inequality. 18.465

… $s=(0,1,\dots,0)$. Build $\operatorname{conv}V(A,x)=U(A,x)$. Finally, $d(A,x)=\min\{|x-u|^2\ :\ u\in\operatorname{conv}A\}$.

Theorem 34.1. Consider a convex and Lipschitz $f:\mathbb R^n\to\mathbb R$, $|f(x)-f(y)|\le L|x-y|$ for all $x,y\in\mathbb R^n$. Then
$$\mathbb P\big(f(x_1,\dots,x_n)\ge M+L\sqrt t\big)\le2e^{-t/4}\qquad\text{and}\qquad\mathbb P\big(f(x_1,\dots,x_n)\le M-L\sqrt t\big)\le2e^{-t/4},$$
where $M$ is the median of $f$ …

Take $x$ with $d(A,x)\le t$ and let $u_0$ be the closest point of $\operatorname{conv}A$. So $f(x)\le f(u_0)+L\sqrt t$. What is $f(u_0)$? We know that $u_0\in\operatorname{conv}A$, so $u_0=\sum\lambda_ia_i$ with $a_i\in A$, $\lambda_i\ge0$, $\sum\lambda_i=1$. Since $f$ is convex and $f\le a$ on $A$,
$$f(u_0)=f\Big(\sum\lambda_ia_i\Big)\le\sum\lambda_if(a_i)\le\sum\lambda_i\,a=a.$$
This implies $f(x)\le a+L\sqrt t$ …

Example. Let $\mathcal H\subseteq\mathbb R^n$ be a bounded set and let
$$f(x_1,\dots,x_n)=\sup_{h\in\mathcal H}\Big|\sum_{i=1}^nh_ix_i\Big|.$$
Let's check:
(1) convexity:
$$f\big(\lambda x+(1-\lambda)y\big)=\sup_{h\in\mathcal H}\Big|\sum_{i=1}^nh_i\big(\lambda x_i+(1-\lambda)y_i\big)\Big|\le\lambda\sup_{h\in\mathcal H}\Big|\sum_{i=1}^nh_ix_i\Big|+(1-\lambda)\sup_{h\in\mathcal H}\Big|\sum_{i=1}^nh_iy_i\Big|;$$
(2) Lipschitz:
$$|f(x)-f(y)|\le\sup_{h\in\mathcal H}\Big|\sum_ih_i(x_i-y_i)\Big|\le\text{(by Cauchy--Schwarz)}\ \sup_{h\in\mathcal H}\Big(\sum_ih_i^2\Big)^{1/2}\Big(\sum_i(x_i-y_i)^2\Big)^{1/2}=|x-y|\underbrace{\sup_{h\in\mathcal H}\Big(\sum_ih_i^2\Big)^{1/2}}_{L=\text{Lipschitz constant}}.$$

Theorem 34.2. If $M$ is the median of $f(x_1,\dots,x_n)$, and $x_1,\dots,x_n$ are i.i.d. with $\mathbb P(x_i=1)=p$ and $\mathbb P(x_i=0)=1-p$, then
$$\mathbb P\Big(\sup_{h\in\mathcal H}\sum_ih_ix_i\ge M+\sup_{h\in\mathcal H}\Big(\sum_ih_i^2\Big)^{1/2}\sqrt t\Big)\le2e^{-t/4}\ \dots$$
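Theorem 34.2 can be illustrated by simulation. A minimal sketch with an arbitrary finite $\mathcal H\subset\mathbb R^n$ (random Gaussian vectors, a toy choice) and Bernoulli$(p)$ coordinates; the empirical tail beyond $M+L\sqrt t$ should sit below $2e^{-t/4}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, t = 200, 0.3, 4.0
H = rng.normal(size=(10, n))             # finite class H (toy choice)
L = np.sqrt((H ** 2).sum(axis=1)).max()  # L = sup_h sqrt(sum_i h_i^2)

def f(x):
    # f(x) = sup_{h in H} |sum_i h_i x_i|: convex and L-Lipschitz in x
    return np.abs(H @ x).max()

samples = np.array([f((rng.random(n) < p).astype(float))
                    for _ in range(2000)])
M = np.median(samples)
tail = np.mean(samples >= M + L * np.sqrt(t))
print(tail, "<=", 2 * np.exp(-t / 4))    # empirical tail vs 2 e^{-t/4}
```

The bound is far from tight here (the tail is essentially zero at this $t$), which is exactly the dimension-free behavior the theorem promises.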
Lecture 35. Talagrand's concentration inequality for empirical processes. 18.465

Let $x'=(x'_1,\dots,x'_n)$ be an independent copy of $x$, and let
$$V(x)=\mathbb E_{x'}\sup_{f\in\mathcal F}\sum_{i=1}^n\big(f(x_i)-f(x'_i)\big)^2$$
be the "random uniform variance" (unofficial name).

Theorem 35.1.
$$\mathbb P\Big(Z(x)\ge\mathbb EZ(x)+2\sqrt{V(x)\,t}\Big)\le4e\cdot e^{-t/4},\qquad\mathbb P\Big(Z(x)\le\mathbb EZ(x)-2\sqrt{V(x)\,t}\Big)\le4e\cdot e^{-t/4}.$$

Recall the symmetrization lemma (Lemma 35.1): … i.e.
$$\mathbb P\Bigg(\sup_{f\in\mathcal F}\sum_{i=1}^nf(x_i)\ \ge\ \sup_{f\in\mathcal F}\sum_{i=1}^nf(x'_i)+2\sqrt{t\,\sup_{f\in\mathcal F}\sum_{i=1}^n\big(f(x_i)-f(x'_i)\big)^2}\Bigg)\le4e^{-t/4}.$$
If we switch $x_i\leftrightarrow x'_i$, nothing changes, so we can switch randomly: the probability above equals
$$\mathbb E_{x,x'}\,\mathbb P_\varepsilon\Big(\Phi_1(\varepsilon)\ \ge\ \Phi_2(\varepsilon)+2\sqrt{t\,\sup_{f\in\mathcal F}\sum_{i=1}^n\big(f(x_i)-f(x'_i)\big)^2}\Big)\qquad\text{for fixed }x,x',$$
where
$$\Phi_1(\varepsilon)=\sup_{f\in\mathcal F}\sum_{i=1}^n\big(f(x'_i)+\varepsilon_i(f(x_i)-f(x'_i))\big),\qquad\Phi_2(\varepsilon)=\sup_{f\in\mathcal F}\sum_{i=1}^n\big(f(x_i)-\varepsilon_i(f(x_i)-f(x'_i))\big).$$
As functions of $\varepsilon$, $\Phi_1$ and $\Phi_2$ are of the type treated in Theorem 34.2, with Lipschitz constant $L=\big(\sup_{f\in\mathcal F}\sum_i(f(x_i)-f(x'_i))^2\big)^{1/2}$, so that $\Phi_1\le\Phi_2+2L\sqrt t$ outside a small event. Thus
$$\mathbb P_\varepsilon\big(\Phi_1\ge\Phi_2+2L\sqrt t\big)\le4e^{-t/4}\qquad\text{and}\qquad\mathbb P_{x,x',\varepsilon}\big(\Phi_1\ge\Phi_2+2L\sqrt t\big)\le4e^{-t/4}.$$
The "random uniform variance": for example, if $\mathcal F=\{f\}$, then $V(x)=\mathbb E_{x'}\sum_{i=1}^n\big(f(x_i)-f(x'_i)\big)^2$ …
Lecture 36. Talagrand's two-point inequality. 18.465

$$\int2^{d(A_1,A_2,x)}\,dP^n(x)\le\frac1{P^n(A_1)P^n(A_2)},\qquad\text{hence}\qquad\mathbb P\big(d(A_1,A_2,x)\ge t\big)\le\frac1{P^n(A_1)P^n(A_2)}\cdot2^{-t}.$$
We first prove the following lemma.

Lemma 36.1. Let $0\le g_1,g_2\le1$, $g_i:\mathcal X\to[0,1]$. Then
$$\int\min\Big(2,\frac1{g_1(x)},\frac1{g_2(x)}\Big)dP(x)\cdot\int g_1(x)\,dP(x)\cdot\int g_2(x)\,dP(x)\le1.$$
Proof. Notice that $\log x\le x-1$, so it is enough to show …

Case $n=1$: taking $g_i(x)=I(x\in A_i)$,
$$\int2^{d(A_1,A_2,x)}\,dP(x)=\int\min\Big(2,\frac1{I(x\in A_1)},\frac1{I(x\in A_2)}\Big)dP(x)\le\frac1{\int I(x\in A_1)\,dP\cdot\int I(x\in A_2)\,dP}=\frac1{P(A_1)P(A_2)}.$$

$n\to n+1$: let $x\in\mathcal X^{n+1}$, $A_1,A_2\subseteq\mathcal X^{n+1}$, and write $x=(x_1,\dots,x_n,x_{n+1})=(z,x_{n+1})$. Define the sections $A_i(x_{n+1})$ and the projections $B_i$ as before. Denoting by $I(x_{n+1})$ the inner integral $\int2^{d(A_1,A_2,(z,x_{n+1}))}\,dP^n(z)$, we get
$$I(x_{n+1})\le2\int2^{d(B_1,B_2,z)}\,dP^n(z)\le\frac2{P^n(B_1)P^n(B_2)},$$
and, by induction,
$$I(x_{n+1})\le\int2^{d(A_1(x_{n+1}),B_2,z)}\,dP^n(z)\le\frac1{P^n(A_1(x_{n+1}))P^n(B_2)},\qquad I(x_{n+1})\le\int2^{d(B_1,A_2(x_{n+1}),z)}\,dP^n(z)\le\frac1{P^n(B_1)P^n(A_2(x_{n+1}))}.$$
Hence
$$I(x_{n+1})\le\min\Big(\frac2{P^n(B_1)P^n(B_2)},\ \frac1{P^n(A_1(x_{n+1}))P^n(B_2)},\ \frac1{P^n(B_1)P^n(A_2(x_{n+1}))}\Big)\ \dots$$
… applying Lemma 36.1 to $g_i(x_{n+1})=P^n(A_i(x_{n+1}))/P^n(B_i)$ and integrating over the last coordinate (note $\int P^n(A_1(x_{n+1}))\,dP(x_{n+1})=P^{n+1}(A_1)$) completes the induction:
$$\int2^{d(A_1,A_2,x)}\,dP^{n+1}(x)\le\frac1{P^{n+1}(A_1)P^{n+1}(A_2)}.$$

Lecture 37. Talagrand's concentration inequality for empirical processes. 18.465

Lemma 37.1. Let $a\le f\le b$ for all $f\in\mathcal F$ and
$$V(x)=\mathbb E_{x'}\sup_{f\in\mathcal F}\sum_{i=1}^n\big(f(x_i)-f(x'_i)\big)^2.$$
Then $\mathbb P\big(V\le4\mathbb EV+(b-a)^2t\big)\ge1-4\cdot2^{-t}$.

(Proof idea: decompose $V$ according to which coordinates of $x$ agree with each of two reference configurations $y^1,y^2$, via $I_1=\{i\le n:x_i=y_i^1\}$, $I_2=\{i\le n:x_i=y_i^2\}$, $I_3=\{i\le n:x_i\ne y_i^1,\ x_i\ne y_i^2\}$,
$$V(x)=\mathbb E_{x'}\sup_{f\in\mathcal F}\Big(\sum_{i\in I_1}\big(f(x_i)-f(x'_i)\big)^2+\sum_{i\in I_2}\big(f(x_i)-f(x'_i)\big)^2+\sum_{i\in I_3}\big(f(x_i)-f(x'_i)\big)^2\Big),$$
and apply the two-point inequality.)

Now let $Z(x)=\sup_{f\in\mathcal F}\big|\sum_{i=1}^nf(x_i)\big|$. Then
$$Z(x)\ \underbrace{\le}_{\text{w.p. }\ge1-(4e)e^{-t/4}}\ \mathbb EZ+2\sqrt{V(x)\,t}\ \underbrace{\le}_{\text{w.p. }\ge1-4\cdot2^{-t}}\ \mathbb EZ+2\sqrt{\big(4\mathbb EV+(b-a)^2t\big)t}.$$
Using the inequality $\sqrt{c+d}\le\sqrt c+\sqrt d$,
$$Z(x)\le\mathbb EZ+4\sqrt{\mathbb EV\,t}+2(b-a)t$$
with high probability.

Next, bound $\mathbb EV$:
$$\mathbb E\sup_{f\in\mathcal F}\sum_{i=1}^n\big(f(x_i)-f(x'_i)\big)^2=\mathbb E\sup_{f\in\mathcal F}\Big(\sum_{i=1}^n\Big[\big(f(x_i)-f(x'_i)\big)^2-\mathbb E\big(f(x_i)-f(x'_i)\big)^2\Big]+2n\operatorname{Var}(f)\Big)$$
$$\le\mathbb E\sup_{f\in\mathcal F}\sum_{i=1}^n\Big[\big(f(x_i)-f(x'_i)\big)^2-\mathbb E\big(f(x_i)-f(x'_i)\big)^2\Big]+2n\sup_{f\in\mathcal F}\operatorname{Var}(f)\le2\,\mathbb E\sup_{f\in\mathcal F}\Big|\sum_{i=1}^n\varepsilon_i\big(f(x_i)-f(x'_i)\big)^2\Big|+2n\sigma^2$$
(by symmetrization), and by the contraction principle (the square is $2(b-a)$-Lipschitz on $[a-b,b-a]$),
$$\le4(b-a)\cdot2\,\mathbb E\sup_{f\in\mathcal F}\Big|\sum_{i=1}^n\varepsilon_if(x_i)\Big|+2n\sigma^2=8(b-a)\,\mathbb EZ+2n\sigma^2.$$
We have proved the following.

Lemma 37.2. $\mathbb EV\le8(b-a)\,\mathbb EZ+2n\sigma^2$, where $\sigma^2=\sup_{f\in\mathcal F}\operatorname{Var}(f)$.

Corollary 37.1. Assume $a\le f\le b$ for all $f\in\mathcal F$. Let $Z=\sup_{f\in\mathcal F}\big|\sum_{i=1}^nf(x_i)\big|$ and $\sigma^2=\sup_{f\in\mathcal F}\operatorname{Var}(f)$. Then
$$Z\le\mathbb EZ+\sqrt{\big(4(b-a)\,\mathbb EZ+2n\sigma^2\big)t}+(b-a)\frac t3$$
with probability at least $1-e^{-t}$. Now divide by $n$ to get
$$\sup_{f\in\mathcal F}\Big|\frac1n\sum_{i=1}^nf(x_i)-\mathbb Ef\Big|\le\mathbb E\sup_{f\in\mathcal F}|\cdots|+\sqrt{\Big(4(b-a)\,\mathbb E\sup_{f\in\mathcal F}|\cdots|+2\sigma^2\Big)\frac tn}+(b-a)\frac t{3n}.$$
Compare this result with Bernstein's inequality …
Lecture 38. Applications of Talagrand's concentration inequality. 18.465

Bounding $\big|\mathbb Ef_0-\frac1n\sum_{i=1}^nf_0(x_i)\big|$ by $\sup_{f\in\mathcal F}\big|\mathbb Ef-\frac1n\sum_{i=1}^nf(x_i)\big|$ is too conservative; we want to pin down the location of $f_0$. What if we knew that $\mathbb Ef_0\le\varepsilon$? Pretend we know $\mathbb Ef_0\le\varepsilon$, i.e. $f_0\in\mathcal F_\varepsilon=\{f\in\mathcal F:\mathbb Ef\le\varepsilon\}$. Then, with probability at least $1-e^{-t}$,
$$\Big|\mathbb Ef_0-\frac1n\sum_{i=1}^nf_0(x_i)\Big|\le\sup_{f\in\mathcal F_\varepsilon}\Big|\mathbb Ef-\frac1n\sum_{i=1}^nf(x_i)\Big|\le\varphi(\varepsilon)+\sqrt{\big(4\varphi(\varepsilon)+2\varepsilon\big)\frac tn}+\frac t{3n}.$$

Take $\varepsilon=2^{-k}$, $k=0,1,2,\dots$, and change $t\to t+2\log(k+2)$. Then, for a fixed $k$, with probability at least $1-e^{-t}\frac1{(k+2)^2}$,
$$\Big|\mathbb Ef_0-\frac1n\sum_{i=1}^nf_0(x_i)\Big|\le\varphi(\varepsilon)+\sqrt{\big(4\varphi(\varepsilon)+2\varepsilon\big)\frac{t+2\log(k+2)}n}+\frac{t+2\log(k+2)}{3n}.$$
Summing over $k$ (using $\sum_k(k+2)^{-2}\le1$) and choosing $k$ with $2^{-k}$ comparable to $\mathbb Ef_0$, so that $k+2\le\log_2\frac1{\mathbb Ef_0}+2$, gives a bound of the form
$$\varphi(\cdot)+\sqrt{\big(4\varphi(\cdot)+2\cdot\big)\frac{t+2\log\big(\log_2\frac1{\mathbb Ef_0}+2\big)}n}+\frac{t+2\log\big(\log_2\frac1{\mathbb Ef_0}+2\big)}{3n}=\Phi(\mathbb Ef_0).$$
Hence $\mathbb Ef_0\le\frac1n\sum_{i=1}^nf_0(x_i)+\Phi(\mathbb Ef_0)$. Denote $x=\mathbb Ef_0$; then $x\le\bar f+\Phi(x)$.

Theorem 38.1. Let $0\le f\le1$ for all $f\in\mathcal F$. Define $\mathcal F_\varepsilon=\{f\in\mathcal F:\mathbb Ef\le\varepsilon\}$ and $\varphi(\varepsilon)=\mathbb E\sup_{f\in\mathcal F_\varepsilon}\big|\mathbb Ef-\frac1n\sum_{i=1}^nf(x_i)\big|$. Then, with probability at least $1-e^{-t}$, for any $f_0\in\mathcal F$, $\mathbb Ef_0\le x^*$, where …
Lecture 39. 18.465

Recall
$$V(A,x)=\big\{(I(x_1\ne y_1),\dots,I(x_n\ne y_n))\ :\ y=(y_1,\dots,y_n)\in A\big\},\qquad U(A,x)=\operatorname{conv}V(A,x),$$
$$d(A,x)=\min\Big\{|s|^2=\sum_{i=1}^ns_i^2\ :\ s\in U(A,x)\Big\}.$$
In the previous lectures, we proved

Theorem 39.1. $\mathbb P\big(d(A,x)\ge t\big)\le\frac1{P(A)}e^{-t/4}$.

Today, we prove

Theorem 39.2. The following are equivalent: (1) $d(A,x)\le t$; (2) for every $\alpha=(\alpha_1,\dots,\alpha_n)$ with $\alpha_i\ge0$ there exists $y\in A$ such that
$$\sum_{i=1}^n\alpha_iI(x_i\ne y_i)\le\sqrt{\sum_{i=1}^n\alpha_i^2\cdot t}.$$
(Proof idea: let $s^0$ be the minimizing point of $U(A,x)$; $\sum_i\alpha_is_i$ is constant on the face containing $s^0$ because $s^0$ is perpendicular to that face, and for a vertex $s$ of that face, coming from some $y\in A$,
$$\sum_i\alpha_iI(x_i\ne y_i)\le\sum_i\alpha_is^0_i\le\Big(\sum_i(s^0_i)^2\Big)^{1/2}\Big(\sum_i\alpha_i^2\Big)^{1/2}\le\sqrt{\sum_i\alpha_i^2\cdot t},$$
since $\sum_i(s^0_i)^2=d(A,x)\le t$.)

We now turn to an application of this characterization.

Theorem 39.3.
$$\mathbb P\Big(B(x_1,\dots,x_n)\le M+2\sqrt{\sum_ix_i^2\cdot t}+1\Big)\ge1-2e^{-t/4}.$$
Proof. Let $A=\{y:B(y_1,\dots,y_n)\le M\}$, where $M$ is a median: $\mathbb P(B\ge M)\ge1/2$ and $\mathbb P(B\le M)\ge1/2$. We proved $\mathbb P\big(d(A,x)\ge t\big)\le\frac1{P(A)}e^{-t/4}$. Take $x$ such that $d(A,x)\le t$, and take $\alpha=(x_1,\dots,x_n)$. Since $d(A,x)\le t$, there exists … Hence
$$B(x_1,\dots,x_n)\lesssim M+2\sqrt{n\,\mathbb Ex_1^2\,t}.$$

Lecture 40. Entropy tensorization inequality. Tensorization of Laplace transform. 18.465

In this lecture, we expose the technique of deriving concentration inequalities with the entropy tensorization inequality. The entropy tensorization inequality enable…
For the first (infimum) formulation, differentiating pointwise, $\frac\partial{\partial x}\int\big(u(\log u-\log x)-(u-x)\big)dP=0$ gives $x=\int u\,dP>0$. For the second formulation, the Lagrangian corresponding to the sup is
$$L(g,\lambda)=\int ug\,dP-\lambda\Big(\int e^g\,dP-1\Big);$$
it is linear in $\lambda$ and concave in $g$, thus $\sup_g\inf_{\lambda\ge0}L=\inf_{\lambda\ge0}\sup_gL$ …

Lemma (tensorization of entropy).
$$\operatorname{Ent}_{P_n}(u)\le\int\Big(\sum_{i=1}^n\operatorname{Ent}_{P_i}(u)\Big)dP_n.$$
Proof. Proof by induction. When $n=1$, the above inequality is trivially true. Suppose
$$\int u\log u\,dP_n\le\int u\,dP_n\log\int u\,dP_n+\int\sum_{i=1}^n\operatorname{Ent}_{P_i}(u)\,dP_n.$$
Integrate with respect to $P_{n+1}$ and use the convexity of entropy, namely $\operatorname{Ent}_{P_{n+1}}\big(\int u\,dP_n\big)\le\int\operatorname{Ent}_{P_{n+1}}(u)\,dP_n$, to obtain
$$\int u\log u\,dP_{n+1}\le\int u\,dP_{n+1}\log\int u\,dP_{n+1}+\int\sum_{i=1}^{n+1}\operatorname{Ent}_{P_i}(u)\,dP_{n+1}.$$
By the definition of entropy, this is exactly $\operatorname{Ent}_{P_{n+1}}(u)\le\int\sum_{i=1}^{n+1}\operatorname{Ent}_{P_i}(u)\,dP_{n+1}$. The tensorization of entropy lemma can …
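The tensorization lemma can be checked numerically on a small product space. A sketch for $n=2$ with a two-point marginal; the function $u$ below is an arbitrary positive test function, not from the notes.

```python
import numpy as np

p = 0.3
P = np.array([1 - p, p])    # two-point marginal on {0, 1}
P2 = np.outer(P, P)         # product measure P x P on {0, 1}^2

u = np.array([[1.0, 2.0],
              [0.5, 3.0]])  # arbitrary positive function u(x1, x2)

def ent(weights, vals):
    # Ent(u) = E[u log u] - E[u] log E[u]  (nonnegative by Jensen)
    m = np.sum(weights * vals)
    return np.sum(weights * vals * np.log(vals)) - m * np.log(m)

lhs = ent(P2.ravel(), u.ravel())
# Expectation over the product measure of the coordinate-wise entropies:
rhs = P @ np.array([ent(P, u[:, j]) for j in range(2)]) \
    + P @ np.array([ent(P, u[i, :]) for i in range(2)])
print(lhs, "<=", rhs)
```

The left-hand side is the entropy under the product measure; the right-hand side averages the one-coordinate entropies, exactly as in the lemma.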
Lecture 41. Application of the entropy tensorization technique. 18.465

Let $Z^i=Z(x_1,\dots,x_{i-1},x'_i,x_{i+1},\dots,x_n)$ and consider $\sum_{i=1}^n\phi\big(-\lambda(Z-Z^i)\big)$. We will use the tensorization-of-entropy technique to prove the following Hoeffding-type inequality. This theorem is Theorem 9 of: Pascal Massart, "About the constants in Talagrand's concentration inequalities for empirical processes," The Annals of Probability, 2000 …

… and
$$\mathbb E\lambda Ze^{\lambda Z}-\mathbb Ee^{\lambda Z}\log\mathbb Ee^{\lambda Z}\le\mathbb E\Big(e^{\lambda Z}\sum_i\phi\big(-\lambda(Z-Z^i)\big)\Big)\le\frac12\lambda^2\,\mathbb E\Big(e^{\lambda Z}\sum_i(Z-Z^i)^2\Big)\le\frac12L\lambda^2\,\mathbb Ee^{\lambda Z}.$$
Center $Z$, and we get
$$\mathbb E\lambda(Z-\mathbb EZ)e^{\lambda(Z-\mathbb EZ)}-\mathbb Ee^{\lambda(Z-\mathbb EZ)}\log\mathbb Ee^{\lambda(Z-\mathbb EZ)}\le\frac12L\lambda^2\,\mathbb Ee^{\lambda(Z-\mathbb EZ)}.$$
Let $F(\lambda)=\mathbb Ee^{\lambda(Z-\mathbb EZ)}$. It follows … optimizing over $\lambda$, we get
$$\mathbb P\big(Z\ge\mathbb EZ+\sqrt{2Lt}\big)\le e^{-t}.$$
The above inequality improves the constant of Hoeffding's inequality.

The following Bennett-type concentration inequality is Theorem 10 of: Pascal Massart, "About the constants in Talagrand's concentration inequalities for empirical processes," The Annals of Probability, 2000 …

Using the property $\Psi\le\Psi_0$: subtract (41.2) from (41.1) and let $\Psi_1=\Psi-\Psi_0$. Then
$$(1-e^{-\lambda})\,\Psi_1'(\lambda)-\Psi_1(\lambda)\le0\quad\Longrightarrow\quad\Big(\frac{\Psi_1(\lambda)}{e^\lambda-1}\Big)'\le0,$$
and since $\lim_{\lambda\to0}\frac{\Psi_1(\lambda)}{e^\lambda-1}=0$, it follows that $\Psi_1\le0$, i.e. $\Psi\le\Psi_0$ …

Theorem (Bennett type). Let $Z=\sup_{f\in\mathcal F}\sum_{i=1}^nf_i$ with $\mathbb Ef_i=0$, $f_i\le u$ for all $i\in\{1,\dots,n\}$, and
$$\sigma^2\overset{\mathrm{def}}{=}\sup_{f\in\mathcal F}\operatorname{var}(f)=\sup_{f\in\mathcal F}\sum_{i=1}^n\mathbb Ef_i^2.$$
Then
$$\mathbb P\Big(Z\ge(1+u)\,\mathbb EZ+\sqrt{2n\sigma^2x}+\frac x3\Big)\le e^{-x}.$$
Proof. Let $Z_k=\sup_{f\in\mathcal F}\sum_{i\ne k}f_i$ and let $f^k$ attain the sup, so that $Z_k=\sum_{i\ne k}f_i^k$. It follows that $Z-Z_k\le u$. Let $\psi$ … Thus
$$\mathbb E\lambda Ze^{\lambda Z}-\mathbb Ee^{\lambda Z}\log\mathbb Ee^{\lambda Z}\le\mathbb E\Big(e^{\lambda Z}\sum_k\psi\big(\lambda(Z-Z_k)\big)\Big)\le f(\lambda)\,\mathbb EZe^{\lambda Z}+f(\lambda)\,\alpha n\sigma^2\,\mathbb Ee^{\lambda Z}.$$
Let $Z_0=Z-\mathbb EZ$, and center $Z$; we get
$$\mathbb E\lambda Z_0e^{\lambda Z_0}-\mathbb Ee^{\lambda Z_0}\log\mathbb Ee^{\lambda Z_0}\le f(\lambda)\,\mathbb EZ_0e^{\lambda Z_0}+f(\lambda)\big(\alpha n\sigma^2+\mathbb EZ\big)\mathbb Ee^{\lambda Z_0}.$$
Let $F(\lambda)=\mathbb Ee^{\lambda Z_0}$ and $\Psi(\lambda)=\log F(\lambda)$; we get $(\lambda-f(\lambda))F'(\lambda)-F(\lambda)\log F(\lambda)\le\dots$
Lecture 42. Stein's method for concentration inequalities. 18.465

Let $(X,X')$ be an exchangeable pair on $\mathcal X$ (i.e., $d\mathbb P(X,X')=d\mathbb P(X',X)$). Let $F(x,x')=-F(x',x)$ be antisymmetric, and $f(x)=\mathbb E\big(F(x,x')\,\big|\,x\big)$. Then $\mathbb Ef(x)=0$. If, further,
$$\Delta(x)=\frac12\,\mathbb E\Big(\big|\big(f(x)-f(x')\big)F(x,x')\big|\ \Big|\ x\Big)\le Bf(x)+C,$$
then
$$\mathbb P\big(f(x)\ge t\big)\le\exp\Big(-\frac{t^2}{2(C+Bt)}\Big).$$
Proof. By the definition of $f(X)$ … Since $m(\lambda)$ is a convex function of $\lambda$ and $m'(0)=0$, $m'(\lambda)$ always has the same sign as $\lambda$. On the interval $0\le\lambda<1/B$, the above inequality can be expressed as
$$m'(\lambda)\le\lambda\big(B\,m'(\lambda)+C\,m(\lambda)\big),\qquad\big(\log m(\lambda)\big)'\le\frac{C\lambda}{1-B\lambda}\ \dots$$

To apply the above theorem to a sum of independent random variables, take $X'_i$ an independent copy of $X_i$ for $i=1,\dots,n$, let $I$ be uniformly distributed over $\{1,\dots,n\}$, and replace the $I$-th summand: with $X=\sum_iX_i$, set $X'=\sum_{i\ne I}X_i+X'_I$. Define $F(X,X')=n(X-X')$, and compute $f(X)$ and $\Delta(X)$ accordingly … This yields
$$\mathbb P\Big(\sum X_i-\mathbb E\sum X_i\ge t\Big)\le\exp\Big(-\frac{t^2}{2\sum_i(c_i^2+\sigma_i^2)}\Big),\qquad\mathbb P\Big(-\Big(\sum X_i-\mathbb E\sum X_i\Big)\ge t\Big)\le\exp\Big(-\frac{t^2}{2\sum_i(c_i^2+\sigma_i^2)}\Big),$$
and, by the union bound,
$$\mathbb P\Big(\Big|\sum X_i-\mathbb E\sum X_i\Big|\ge t\Big)\le2\exp\Big(-\frac{t^2}{2\sum_i(c_i^2+\sigma_i^2)}\Big).$$

Example 42.3. Let $(a_{ij})$ … For $X=\sum_{i=1}^na_{i,\pi(i)}$ with $\pi$ a uniform random permutation, exchanging the images of two uniformly chosen indices $I,J$ gives
$$F(X,X')=\frac n2\big(a_{I,\pi(I)}+a_{J,\pi(J)}-a_{I,\pi(J)}-a_{J,\pi(I)}\big),$$
and
$$f(X)\overset{\mathrm{def}}{=}\mathbb E\big(F(X,X')\,\big|\,X\big)=\sum_ia_{i,\pi(i)}-\frac1n\sum_{i,j}a_{i,j}=X-\mathbb EX,\qquad\Delta(X)\le\ \dots$$

Example (Curie–Weiss model). Consider the Gibbs measure on $\sigma\in\{-1,+1\}^n$ with Hamiltonian
$$\frac\beta n\sum_{i<j}\sigma_i\sigma_j+\beta h\sum_{i=1}^n\sigma_i.$$
We are interested in the concentration of the magnetization $m(\sigma)=\frac1n\sum_{i=1}^n\sigma_i$ around $\tanh\big(\beta m(\sigma)+\beta h\big)$, where $\tanh(x)=\frac{e^x-e^{-x}}{e^x+e^{-x}}$. Given any $\sigma$, we can pick $I$ uniformly and independently from $\{1,\dots,n\}$ and resample $\sigma_I$ according to the conditional distribution of $\sigma_I$ given $\{\sigma_j:j\ne I\}$ (Gibbs sampling) …
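The Gibbs-sampling dynamics and the self-consistency $m(\sigma)\approx\tanh(\beta m(\sigma)+\beta h)$ can be simulated directly. A sketch with toy parameter values $\beta$, $h$, $n$; the concentration statement itself is the theorem above, not this simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta, h = 500, 0.5, 0.2
sigma = rng.choice([-1.0, 1.0], size=n)

# Gibbs sampling: repeatedly pick a spin I uniformly and resample it from its
# conditional distribution given the rest; for the Curie-Weiss Hamiltonian,
# P(sigma_I = +1 | rest) = 1 / (1 + exp(-2 (beta * m_rest + beta * h))).
for _ in range(50 * n):
    i = rng.integers(n)
    m_rest = (sigma.sum() - sigma[i]) / n
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * (beta * m_rest + beta * h)))
    sigma[i] = 1.0 if rng.random() < p_plus else -1.0

m = sigma.mean()
print(round(m, 3), round(float(np.tanh(beta * m + beta * h)), 3))
```

After equilibration the sampled magnetization sits close to the fixed point of $m\mapsto\tanh(\beta m+\beta h)$, which for these subcritical parameters is unique.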
MIT OpenCourseWare
http://ocw.mit.edu

6.080 / 6.089 Great Ideas in Theoretical Computer Science, Spring 2008
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

6.080/6.089 GITCS, Feb 07, 2008
Lecturer: Scott Aaronson   Scribe: Mergen Nachin

Lecture 2

Administrative…
https://ocw.mit.edu/courses/6-080-great-ideas-in-theoretical-computer-science-spring-2008/33742c58e0e02d6ae058fc87c4b981d7_lec2.pdf
… about Euclid's GCD algorithm, which was one of the first non-trivial algorithms known to humankind. Here is a digression. The area of a circle is A = πr². It's obvious that the area of a circle should go like r²; the question is why the constant of proportionality (π) should be the same one that relates circumfe…
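Euclid's algorithm itself fits in a few lines; a minimal sketch:

```python
def gcd(a, b):
    # Euclid's algorithm: gcd(a, b) = gcd(b, a mod b), terminating at b = 0.
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # -> 21
```

Each step replaces the larger number by a remainder, so the arguments shrink fast; this is why the algorithm is efficient even on enormous inputs.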
… men are mortal, Socrates is a man, therefore Socrates is mortal. This is a syllogism. In more modern language, we call it transitivity of implications. In general, a syllogism is: if A ⇒ B is valid and B ⇒ C is valid, then A ⇒ C is valid. Remark: what do we mean by "⇒"? "A ⇒ B" is valid if A is false or B is true or…
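Both the meaning of "⇒" and the validity of the syllogism can be checked mechanically over all truth assignments; a minimal sketch:

```python
from itertools import product

def implies(a, b):
    # "A => B" is valid unless A is true and B is false.
    return (not a) or b

# Syllogism (transitivity): (A=>B and B=>C) => (A=>C), for every assignment.
syllogism_holds = all(
    implies(implies(a, b) and implies(b, c), implies(a, c))
    for a, b, c in product([False, True], repeat=3)
)
print(syllogism_holds)  # -> True
```

Exhaustive truth-table checking like this is exactly what it means for a propositional formula to be a tautology.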
… the rule "If you are under 21 then you are not drinking." Whom do you have to check to test this rule: someone who is drinking, someone who isn't drinking, someone who's over 21, or someone who's under 21? And then, of course, almost everyone gets it right, even though this problem is logically equivalent in every way to the…
As we all see in our everyday life, everyone draws inferences all the time. However, he was (as far as I know) the first person in the historical record to (as it were) draw a box around the inference rule, to say that this is a general law of thought. This is crucial because it allows us to reason about the…
… what today we would see as the right picture of what such a machine would be. To him, it's not that you would take some lifeless clump of clay and utter some mystical incantation that would magically imbue it with the power of speech, like in the legend of Pinocchio, or the legend of the Golem, or even in a lot of science fiction…
… can't be analyzed into thousands of baby steps? Maybe they can! Many things seem magical until you know the mechanism. So why not logical reasoning itself? To me, that's really the motivation for studying logic: to discover "The Laws of Thought." But to go further, we need to roll up our sleeves and talk about som…
… all four sentences? No. By applying the rules, we can reach a contradiction! Do you agree that if we reach a logical contradiction by applying the rules, then the sentences can't all be valid? Suppose a set of sentences is inconsistent (i.e., there's no way of setting the variables so that all of them are satisfied). …
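For propositional sentences over a few variables, checking consistency can be done by brute force over all assignments. A sketch with a made-up three-sentence example (not the four sentences from the lecture):

```python
from itertools import product

# Encode each sentence as a predicate over the variables (A, B).
# Example set: "A", "A => B", "not B" -- pairwise plausible, jointly contradictory.
sentences = [
    lambda A, B: A,
    lambda A, B: (not A) or B,   # A => B
    lambda A, B: not B,
]

consistent = any(all(s(A, B) for s in sentences)
                 for A, B in product([False, True], repeat=2))
print(consistent)  # -> False: no assignment satisfies all three
```

This brute-force search is exponential in the number of variables, which is precisely why proof rules that derive a contradiction syntactically are so valuable.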
… that's true, you can get by cranking the rules. In the next lecture: a bit of first-order logic!
Introduction to Simulation. Lecture 10: Modified Newton Methods. Jacob White.
Thanks to Deepak Ramaswamy, Jaime Peraire, Michal Rewienski, and Karen Veroy.

Outline:
- Damped Newton schemes: globally convergent if the Jacobian is nonsingular; difficulty with singular Jacobians
- Introduce continuation schemes: problem with…
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
Non-converging case (1-D picture): Newton need not converge, regardless of the initial guess. Limiting the changes in x might improve convergence.

Newton algorithm with limiting, for solving F(x) = 0:

    x^0 = initial guess, k = 0
    repeat {
        compute F(x^k), J_F(x^k)
        solve J_F(x^k) Δx^{k+1} = -F(x^k)
        x^{k+1} = x^k + α^k Δx^{k+1}
        k = k + 1
    } until converged

Damped Newton theorem (global convergence). If
a) ‖J_F(x)^{-1}‖ ≤ β (the inverse is bounded), and
b) ‖J_F(x) - J_F(y)‖ ≤ ℓ‖x - y‖ (the derivative is Lipschitz continuous),
then there exists a sequence of step sizes α^k ∈ (0, 1] such that ‖F(x^{k+1})‖ ≤ γ‖F(x^k)‖ with γ < 1. Every step reduces ‖F‖: global convergence!

Damped Newton example (a diode-resistor nodal equation):
f(v₂) = (v₂ - 1)/10 + 10⁻¹⁶(e^{v₂/0.025} - 1) = 0.

Damped Newton nested iteration:

    x^0 = initial guess, k = 0
    repeat {
        compute F(x^k), J_F(x^k)
        solve J_F(x^k) Δx^{k+1} = -F(x^k)
        find α^k (e.g., minimizing ‖F(x^k + α^k Δx^{k+1})‖)
        x^{k+1} = x^k + α^k Δx^{k+1}
        k = k + 1
    } until converged
F ( k x ) + J F ( ) k k x α ⎡ ⎢ ⎣ 1 − J F ( k x ) k ( F x ) ≤⎤ ⎥ ⎦ (cid:65) 2 k α J F 1 − ( k x ) F ( k x ) 2 SMA-HPC ©2003 MIT Newton Method with Limiting Damped Newton Theorem Proof-Cont From the previous slide k ( F x ) 1 + − F ( k x ) + J F ( ) k k x α ⎡ ⎢ ⎣ 1 − J F ( k x ) k ( F x ) ≤⎤ ⎥ ⎦ (cid:65) 2 k α J...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
  ‖F(x^{k+1})‖ ≤ [1 − α^k + ((α^k)²/2) ℓ β² ‖F(x^k)‖] ‖F(x^k)‖
Two cases:
1) (ℓ/2) β² ‖F(x^k)‖ < 1/2: pick α^k = 1 (standard Newton)
   ⇒ 1 − α^k + ((α^k)²/2) ℓ β² ‖F(x^k)‖ = (ℓ/2) β² ‖F(x^k)‖ < 1/2
2) (ℓ/2) β² ‖F(x^k)‖ > 1/2: pick α^k = 1/(ℓ β² ‖F(x^k)‖)
   ⇒ 1 − α^k + ((α^k)²/2) ℓ β² ‖F(x^k)‖ = 1 − 1/(2 ℓ β² ‖F(x^k)‖) < 1...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
First, show that the iterates do not increase ‖F(x^k)‖; second, use the non-increasing property to prove convergence.
Newton Method with Limiting: Damped Newton Nested Iteration
  x⁰ = initial guess, k = 0
  Repeat {
    Compute F(x^k) and J_F(x^k)
    Solve J_F(x^k) Δx^k = −F(x^k)
    Find α^k ...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
Continuation Schemes: Basic Concepts
Solve F̃(x(λ), λ) = 0, where:
a) F̃(x(0), 0) = 0 starts the continuation
b) F̃(x(1), 1) = F(x(1)) ends the continuation
c) x(λ) is sufficiently smooth. Hard to insure!
[Figure: x(λ) vs. λ from 0 to 1; a discontinuous path is marked "Disallowed"]
Template Algorithm: x⁰(λ) = x(λ_prev); solve F̃(x...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
Observations on F̃(x(λ), λ) = λ F(x(λ)) + (1 − λ) x(λ):
At λ = 0: F̃(x(0), 0) = x(0) = 0 and ∂F̃/∂x (x(0), 0) = I. The problem is easy to solve and the Jacobian is definitely nonsingular.
At λ = 1: F̃(x(1), 1) = F(x(1)) and ∂F̃/∂x (x(1), 1) = ∂F/∂x (x(1)): the original problem...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
Update Improvement (first-order prediction):
  F̃(x(λ + δλ), λ + δλ) ≈ F̃(x(λ), λ) + ∂F̃/∂x (x(λ), λ) (x(λ + δλ) − x(λ)) + ∂F̃/∂λ (x(λ), λ) δλ = 0
  ⇒ ∂F̃/∂x (x(λ), λ) (x(λ + δλ) − x(λ)) = − ∂F̃/∂λ (x(λ), λ) δλ
We have ∂F̃/∂x from the last step's Newton solve; this yields a better guess for the next solve at λ + δλ.
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
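The continuation scheme with the first-order update can be sketched as follows: a minimal scalar version, assuming the homotopy F̃(x, λ) = λF(x) + (1 − λ)x from the slides, applied to a hypothetical test problem x² − 2 = 0.

```python
def continuation(F, dF, steps=20, tol=1e-12):
    """Solve F(x) = 0 via the homotopy Ft(x, lam) = lam*F(x) + (1 - lam)*x.
    At lam = 0 the solution is x = 0 and dFt/dx = 1; each lam step seeds Newton
    with the first-order prediction x - (dFt/dx)^(-1) * (dFt/dlam) * dlam."""
    x, dlam = 0.0, 1.0 / steps
    for k in range(steps):
        lam = k * dlam
        dFdx = lam * dF(x) + (1.0 - lam)     # dFt/dx at (x, lam)
        dFdlam = F(x) - x                    # dFt/dlam at (x, lam)
        x -= (dFdlam / dFdx) * dlam          # Euler predictor step
        lam += dlam
        for _ in range(50):                  # Newton corrector at the new lam
            G = lam * F(x) + (1.0 - lam) * x
            if abs(G) < tol:
                break
            x -= G / (lam * dF(x) + (1.0 - lam))
    return x

# Hypothetical test problem: x^2 - 2 = 0, positive root sqrt(2)
root = continuation(lambda x: x * x - 2.0, lambda x: 2.0 * x)
```

The predictor reuses the Jacobian already formed for the corrector, which is the "have ∂F̃/∂x from the last step's Newton" observation above.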
Continuation Schemes: Jacobian Altering Scheme
[Figure: x(λ) curve traversed with λ steps] Still can have problems: with λ steps one must switch from increasing to decreasing λ, then back to increasing λ.
Arc-length steps?
[Figure: the same x(λ) curve traversed with arc-length steps instead of λ steps]
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
Continuation Schemes: Jacobian Altering Scheme, Arc-length
[Figure: x(λ) curve with a turning point] What happens at the turning point? The upper left-hand block ∂F̃/∂x is singular. The arc-length formulation appends the row obtained by differentiating the constraint ‖x − x_prev‖² + (λ − λ_prev)² = (δs)²:

  [ ∂F̃/∂x (x^k, λ^k)    ∂F̃/∂λ (x^k, λ^k) ]
  [ 2 (x^k − x_prev)ᵀ    2 (λ^k − λ_prev)  ]

Summary
• Damped Newton ...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
6.867 Machine learning, lecture 8 (Jaakkola)
Lecture topics:
• Support vector machine and kernels
• Kernel optimization, selection
Support vector machine revisited
Our task here is to first turn the support vector machine into its dual form, where the examples only appear in inner products. To this end, assume ...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
first that J(θ, θ₀) really is equivalent to the original problem. Suppose we set θ and θ₀ such that at least one of the constraints, say the one corresponding to (xi, yi), is violated. In that case
  −αi [yi(θᵀφ(xi) + θ₀) − 1] > 0   (4)
for any αi > 0. We can then set αi = ∞ to obtain J(θ, θ₀) = ∞. You can think...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
the right hand side by first obtaining θ and θ₀ as a function of the Lagrange multipliers (and the data). To this end,
  (d/dθ₀) J(θ, θ₀; α) = − Σ_{t=1}^n αt yt = 0   (7)
  (d/dθ) J(θ, θ₀; α) = θ − Σ_{t=1}^n αt yt φ(xt) = 0   (8)
So, again, the solution for θ is in the span of the feature vectors corresponding to the training...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
the input vectors does not appear explicitly as part of the optimization problem. It is formulated solely on the basis of the Gram matrix:
  K = [ φ(x1)ᵀφ(x1)  ···  φ(x1)ᵀφ(xn)
        ⋮                   ⋮
        φ(xn)ᵀφ(x1)  ···  φ(xn)ᵀφ(xn) ]
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
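The Gram matrix is straightforward to compute for any kernel. A minimal sketch; the kernel choices and the data points are illustrative, not from the lecture.

```python
import numpy as np

def gram_matrix(X, kernel):
    """K[i, j] = k(x_i, x_j) for the rows of X."""
    n = len(X)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = kernel(X[i], X[j])
    return K

# Illustrative kernels (hypothetical choices, not from the lecture):
linear = lambda x, z: float(np.dot(x, z))
rbf = lambda x, z, gamma=0.5: float(np.exp(-gamma * np.sum((x - z) ** 2)))

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
K = gram_matrix(X, linear)   # symmetric and positive semidefinite
```

Any valid kernel must yield a symmetric positive semidefinite K, which is easy to verify numerically via its eigenvalues.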
Moreover, the identity of the support vectors will depend on the feature mapping or the kernel function. But what is θ̂₀? It appeared to drop out of the optimization problem. We can set θ̂₀ after solving for α̂t by looking at the support vectors. Indeed, for all i ∈ SV we should have
  yi(θ̂ᵀφ(xi) + θ̂₀) = yi( Σ_{t=1}^n α̂t yt φ(xt)ᵀφ(xi) + θ̂₀ ) = 1 ...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
the best kernel function. Unfortunately, this won't work without some care. For example, if we multiply all the feature vectors by 2, then the resulting geometric margin will also be twice as large (we just expanded the space; the relations between the points remain the same). It is necessary to perform some normalization...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
αi αj yi yj [φ(xi)ᵀφ(xj)],
  subject to 0 ≤ αt ≤ C,  Σ_{t=1}^n αt yt = 0   (21), (22)
The resulting discriminant function has the same form except that the α̂t values can be different. What about θ̂₀? We need to identify classification constraints that are satisfied with equality. These are no longer simply the o...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
flexible as finding the best convex combination of basic (fixed) kernels. Key to such an approach is the measure we would optimize. Ideally, this measure would be the generalization error, but we obviously have to settle for a surrogate measure. The surrogate measure could be cross-validation or an alternative criterion...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
MIT OpenCourseWare http://ocw.mit.edu
18.969 Topics in Geometry: Mirror Symmetry, Spring 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
MIRROR SYMMETRY: LECTURE 9
DENIS AUROUX
1. The Quintic (contd.)
To recall where we were, we had (1) Xψ = {(x0 : ···
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
In terms of z = (5ψ)⁻⁵, the period over the cycle T0 is proportional to
  (5) φ₀(z) = Σ_{n=0}^∞ (5n)!/(n!)⁵ zⁿ
Setting Θ = z d/dz, so that Θ(Σ cn zⁿ) = Σ n cn zⁿ, we obtained the Picard-Fuchs equation
  (6) Θ⁴φ₀ = 5z(5Θ + 1)(5Θ + 2)(5Θ + 3)(5Θ + 4)φ₀
Proposition 1. All periods Ω̌ψ satisfy this equation.
Note that all ...
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
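Comparing coefficients of zⁿ on both sides of the Picard-Fuchs equation (6) gives the recurrence n⁴cₙ = 5(5n − 4)(5n − 3)(5n − 2)(5n − 1)c_{n−1}, which the closed form cₙ = (5n)!/(n!)⁵ from (5) indeed satisfies. A quick numerical check:

```python
from math import factorial

def c(n):
    """Coefficient of z^n in phi_0(z): c_n = (5n)! / (n!)^5 (an integer)."""
    return factorial(5 * n) // factorial(n) ** 5

# Theta^4 phi_0 = 5z(5Theta+1)(5Theta+2)(5Theta+3)(5Theta+4) phi_0 holds
# coefficient by coefficient:  n^4 c_n = 5(5n-4)(5n-3)(5n-2)(5n-1) c_{n-1}
for n in range(1, 10):
    assert n**4 * c(n) == 5 * (5*n - 4) * (5*n - 3) * (5*n - 2) * (5*n - 1) * c(n - 1)
```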
it has a residue on X, which is ideally a 3-form on X but is at least a class in H³(X, C).
Recall from complex analysis: if φ(z) has a pole at 0, then res₀(φ) = (1/2πi) ∮_{S¹} φ(z) dz.
Now, let's say that we have a 3-cycle C in X: we can associate to it a "tube" 4-cycle in P⁴, the preimage of C in the boundary of a tub...
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
dx0 ∧ ··· ∧ d̂xi ∧ ··· ∧ d̂xj ∧ ··· ∧ dx4, with deg(gj) = 5ℓ − 4, then
  (13) dφ = Σ_j ( (1/fψ^ℓ) ∂gj/∂xj − ℓ (gj/fψ^{ℓ+1}) ∂fψ/∂xj ) Ω
In particular, if we have something of the form (Σ_j gj ∂fψ/∂xj) Ω / fψ^{ℓ+1} (the Jacobian ideal is the span of {∂fψ/∂xi})...
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
  w(z) = ( f(z), Θf(z), …, Θ^{s−1}f(z) )ᵀ,
with a companion matrix carrying 1's above the diagonal and last-row entries −B0(z), …, −B_{s−1}(z). The fundamental theorem for these differential equations states that there exist a constant s × s matrix R and an s × s matrix of holomorphic functions S(z) such that
  (16) Φ(z) = S(z) exp((log z) R) = S(z)( id + (log z) R + (log² z / 2) R² + ···
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
is nilpotent, and our assumption holds. The corresponding monodromy is
  (19) T = e^{2πiR} =
    [ 1  2πi  (2πi)²/2  (2πi)³/6 ]
    [ 0   1    2πi      (2πi)²/2 ]
    [ 0   0     1         2πi    ]
    [ 0   0     0          1     ]
If ω(z) = ∫_β Ω̌ψ is a period, then it is a solution of the Picard-Fuchs equation, and thus a linear combination...
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
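Since R is nilpotent (R⁴ = 0 for the 4 × 4 case), the exponential e^{2πiR} is a finite sum, and a direct computation reproduces the matrix in (19). A quick check with numpy, assuming R has ones on the first superdiagonal as read off from the slide:

```python
import numpy as np

# R with ones on the first superdiagonal; R^4 = 0, so exp(2*pi*i*R) truncates
R = np.diag(np.ones(3), k=1)
z = 2j * np.pi
R2 = R @ R
R3 = R2 @ R
T = np.eye(4, dtype=complex) + z * R + (z**2 / 2) * R2 + (z**3 / 6) * R3
```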
Φ(z) = S(z) exp(R log z), φ₁(z) = φ₀(z) log z + φ̃(z), with φ̃(z) holomorphic. Now
  (20) Θʲ(f(z) log z) = (Θʲf) log z + j(Θ^{j−1}f)
If we write F(x) = x⁴ − 5z ∏_{j=1}^4 (5x + j), then
  (21) Dφ₁(z) = F(Θ)(φ₀(z) log z + φ̃(z)) = (F(Θ)φ₀) log z + F′(Θ)φ₀ + F(Θ)φ̃
Since 0 = Dφ₀ = Dφ₁, we find Dφ̃(z) = −F′(Θ)φ₀...
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
Substitution of Power Series
We can find the power series of e^{−t²} by starting with the power series for e^x and making the substitution x = −t².
  e^x = 1 + x + x²/2! + x³/3! + ···   (R = ∞)
  e^{−t²} = 1 + (−t²) + (−t²)²/2! + (−t²)³/3! + ···
          = 1 − t² + t⁴/2! − t⁶/3! + ···
The signs of the terms alternate...
https://ocw.mit.edu/courses/18-01sc-single-variable-calculus-fall-2010/33f76c6f2abf4190beb0a0cf29065926_MIT18_01SCF10_Ses100d.pdf
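The substitution can be checked numerically: truncating the series for e^{−t²} after a modest number of terms already matches math.exp(−t²) to machine precision for moderate t.

```python
import math

def exp_neg_t2_series(t, terms=30):
    """Partial sum of e^{-t^2} = sum_{n>=0} (-t^2)^n / n! = 1 - t^2 + t^4/2! - ..."""
    return sum((-t * t) ** n / math.factorial(n) for n in range(terms))

t = 0.7
approx = exp_neg_t2_series(t)
exact = math.exp(-t * t)
```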
L2: Combinational Logic Design (Construction and Boolean Algebra)
Acknowledgements: Materials in this lecture are courtesy of the following sources and are used with permission. Prof. Randy Katz (Unified Microelectronics Corporation Distinguished Prof...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
[Figure: NMOS cross-section showing the gate oxide, the inversion-layer channel in the p substrate, and n+ source/drain regions, together with ID vs. VDS curves for VGS = 1.0, 1.5, 2.0, 2.5 V marking the resistive and saturation regions; VT = 0.5 V]
The MOS transistor is very non-linear. The switch-resistor model is sufficient for first-order analysis.
L2: 6.111 Spring 2006 Introductory Digital Systems Laboratory
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
[Figure: CMOS inverter voltage transfer characteristic, Vout vs. Vin, for Vin from 0 to 2.5 V]
CMOS gates have:
• Rail-to-rail swing (0 V to VDD)
• Large noise margins
• "Zero" static power dissipation
Possible Functions of Two...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
Basic gates and their truth tables:
  X Y | X·Y (AND) | (X·Y)′ (NAND) | X+Y (OR) | (X+Y)′ (NOR)
  0 0 |    0      |      1        |    0     |     1
  0 1 |    0      |      1        |    1     |     0
  1 0 |    0      |      1        |    1     |     0
  1 1 |    1      |      0        |    1     |     0
Exclusive (N)OR Gate
  X Y | X ⊕ Y (XOR) | (X ⊕ Y)′ (XNOR)
  0 0 |      0      |       1
  0 1 |      1      |       0
  1 0 |      1      |       0
  1 1 |      0      |       1
...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
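The truth tables above can be generated programmatically by enumerating all input pairs; a small sketch (the gate names are just dictionary labels):

```python
from itertools import product

gates = {
    "AND":  lambda x, y: x & y,
    "NAND": lambda x, y: 1 - (x & y),
    "OR":   lambda x, y: x | y,
    "NOR":  lambda x, y: 1 - (x | y),
    "XOR":  lambda x, y: x ^ y,
    "XNOR": lambda x, y: 1 - (x ^ y),
}

def truth_table(name):
    """Output column for inputs (X, Y) in the order 00, 01, 10, 11."""
    return [gates[name](x, y) for x, y in product((0, 1), repeat=2)]
```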