$\ldots - f(x'_i))^2$. Use the Symmetrization Lemma with $\xi_1 = Z(x)$, $\xi_2 = Z(x')$, and $\xi_3 = \sup_{f \in \mathcal{F}} \sum_{i=1}^n (f(x_i) - f(x'_i))^2$. It is enough to prove that
$$P\Big(Z(x) \ge Z(x') + 2\sqrt{t \sup_{f \in \mathcal{F}} \sum_{i=1}^n (f(x_i) - f(x'_i))^2}\Big) \le 4e^{-t/4},$$
i.e. $P\big(\sup_{f \in \mathcal{F}} \sum_{i=1}^n \ldots\big)$ ...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$= 0) = P(\varepsilon_i = 1) = 1/2$. Then
$$P_{x,x'}\Big(\sup_{f\in\mathcal F}\sum_{i=1}^n f(x_i) \ge \sup_{f\in\mathcal F}\sum_{i=1}^n f(x'_i) + 2\sqrt{t \sup_{f\in\mathcal F}\sum_{i=1}^n (f(x_i) - f(x'_i))^2}\Big)$$
$$= P_{x,x',\varepsilon}\Big(\sup_{f\in\mathcal F}\sum_{i=1}^n \big(f(x'_i) + \varepsilon_i(f(x_i) - f(x'_i))\big) \ge \sup_{f\in\mathcal F}\sum_{i=1}^n \big(f(x_i) - \varepsilon_i(f(x_i) - f(x'_i))\big) + 2\sqrt{t \sup_{f\in\mathcal F}\ldots}\Big)$$
Define ...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
Moreover, $\mathrm{Median}(\Phi_1) = \mathrm{Median}(\Phi_2)$, and
$$P_\varepsilon\big(\Phi_1 \le M(\Phi_1) + L\sqrt{t}\big) \ge 1 - 2e^{-t/4} \quad\text{and}\quad P_\varepsilon\big(\Phi_2 \ge M(\Phi_2) - L\sqrt{t}\big) \ge 1 - 2e^{-t/4}.$$
With probability at least $1 - 4e^{-t/4}$ both inequalities above hold:
$$\Phi_1 \le M(\Phi_1) + L\sqrt{t} = M(\Phi_2) + L\sqrt{t} \le \Phi_2 + 2L\sqrt{t}.$$
Thus
$$P_\varepsilon\big(\Phi_1 \ge \Phi_2 + 2L\sqrt{t}\big) \le 4e^{-t/4} \quad\text{and}\quad P_{x,x',\varepsilon}\big(\Phi_1 \ge \Phi_2 + \ldots$$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
...iance.

Lecture 36: Talagrand's two-point inequality.

Let $x \in X^n$. Suppose $A_1, A_2 \subseteq X^n$. We want to define $d(A_1, A_2, x)$.

Definition 36.1. $d(A_1, A_2, x) = \inf\big\{\mathrm{card}\{i \le n : x_i \ne y_i^1 \text{ and } x_i \ne y_i^2\} : y^1 \in A_1,\ y^2 \in A_2\big\}$.

Theorem 36.1.
$$E\,2^{d(A_1,A_2,x)} = \int 2^{d(A_1,A_2,x)}\, dP^n(x) \le \frac{1}{P^n(A_1)\,P\ldots}$$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
Lemma: $\min\big(2, \tfrac{1}{g_1}, \tfrac{1}{g_2}\big) + g_1 + g_2 \le 3$. If the min is equal to $2$, then $g_1, g_2 \le \tfrac12$ and the sum is at most $3$. If the min is equal to $\tfrac{1}{g_1}$, then $g_1 \ge \tfrac12$ and $g_1 \ge g_2$, so $\min + g_1 + g_2 \le \tfrac{1}{g_1} + 2g_1 \le 3$.

We now prove the Theorem:

Proof. By induction on $n$. For $n = 1$: $d(A_1, A_2, x) = 0$ if $x \in A_1 \cup A_2$ and $d(A_1, A_2, \ldots$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$\ldots = A_2(x_{n+1})$, and
$$d(A_1, A_2, x) = d(A_1, A_2, (z, x_{n+1})) \le 1 + d(B_1, B_2, z),$$
$$d(A_1, A_2, (z, x_{n+1})) \le d(A_1(x_{n+1}), B_2, z), \qquad d(A_1, A_2, (z, x_{n+1})) \le d(B_1, A_2(x_{n+1}), z).$$
Then
$$\int 2^{d(A_1,A_2,x)}\, dP^{n+1}(z, x_{n+1}) = \int \underbrace{\Big(\int 2^{d(A_1,A_2,(z,x_{n+1}))}\, dP^n(z)\Big)}_{I(x_{n+1})}\, dP(x_{n+1}).$$
The inner integral can be bounded by induction as follow...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$g_1(x_{n+1}) = P^n(A_1(x_{n+1}))/P^n(B_1)$ and $g_2(x_{n+1}) = P^n(A_2(x_{n+1}))/P^n(B_2)$. So, using the Lemma and the fact that $3 - a - b \le \frac{1}{ab}$ for $a, b \in (0,1]$,
$$\int I(x_{n+1})\, dP(x_{n+1}) \le \frac{1}{P^n(B_1)P^n(B_2)} \int \min\Big(2, \frac{1}{g_1}, \frac{1}{g_2}\Big)\, dP \le \frac{1}{P^n(B_1)P^n(B_2)} \cdot \frac{1}{\int g_1\, dP \cdot \int g_2\, dP} = \frac{1}{P^{n+1}(A_1\ldots}$$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
Let $A = \{y \in X^n : V(y) \le M\} \subseteq X^n$. Hence, $A$ consists of points with typical behavior. We will use control by two points to show that any other point is close to these two points. By control by two points,
$$P(d(A, A, x) \ge t) \le \frac{1}{P(A)\,P(A)} \cdot 2^{-t} \le 4 \cdot 2^{-t}.$$
Take any $x \in X^n$. With probability at least $1 - 4 \cdot 2^{-t}$, $d(A, A\ldots$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$\ldots - f(x'_i))^2 + E_{x'} \sup_{f\in\mathcal F} \sum_{i \in I_3} (f(x_i) - f(x'_i))^2$
$$\le E_{x'} \sup_{f\in\mathcal F} \sum_{i \in I_2} (f(y^1_i) - f(x'_i))^2 + E_{x'} \sup_{f\in\mathcal F} \sum_{i=1}^n (f(y^2_i) - f(x'_i))^2 + (b-a)^2 t = V(y^1) + V(y^2) + (b-a)^2 t \le M + M + (b-a)^2 t$$
because $y^1, y^2 \in A$. Hence,
$$P\big(V(x) \le 2M + (b-a)^2 t\big) \ge 1 - \ldots$$
Finally, $M \le 2EV$ because ...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
...1. Assume $a \le f \le b$ for all $f \in \mathcal F$. Let $Z = \sup_{f\in\mathcal F} \big|\sum_{i=1}^n f(x_i)\big|$ and $V = \sup_{f\in\mathcal F} \sum_{i=1}^n (f(x_i) - f(x'_i))^2$. Then
$$P\Big(Z \le EZ + 4\sqrt{EV \cdot t} + 2(b-a)t\Big) \ge 1 - (4e)e^{-t/4} - 4 \cdot 2^{-t}.$$
This is an analog of Bernstein's inequality: $4\sqrt{EV\, t}$ gives the Gaussian behavior, and $2(b-a)t$ the Poisson behavior. Now, consider the fo...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$$\ldots \le \sup_{f\in\mathcal F} \sum_{i=1}^n \varepsilon_i\big(f(x_i) - f(x'_i)\big)^2 + 2n\sigma^2 \le \ldots$$
Note that the square function $[-(b-a), (b-a)] \to \mathbb R$ is a contraction: its largest derivative on $[-(b-a), (b-a)]$ is at most $2(b-a)$. Note that...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$\ldots \big|\sum_{i=1}^n f(x_i)\big|$ and $\sigma^2 = \sup_{f\in\mathcal F} \mathrm{Var}(f)$. Then
$$P\Big(Z \le EZ + 4\sqrt{\big(8(b-a)EZ + 2n\sigma^2\big)\, t} + 2(b-a)t\Big) \ge 1 - (4e)e^{-t/4} - 4 \cdot 2^{-t}.$$
Using other approaches, one can get better constants:
$$P\Big(Z \le EZ + \sqrt{\big(4(b-a)EZ + 2n\sigma^2\big)\, t} + (b-a)\frac t3\Big) \ge 1 - e^{-t}.$$

Lecture 38: Applications of Talagrand's concentration ...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$$\le E\sup_{f\in\mathcal F}|\ldots| + \sqrt{\Big(4(b-a)E\sup_{f\in\mathcal F}|\ldots| + 2\sigma^2\Big)\frac tn} + (b-a)\frac{t}{3n}.$$
Compare this result to the martingale-difference method (McDiarmid):
$$\sup_{f\in\mathcal F}\Big|\frac1n\sum_{i=1}^n f(x_i) - Ef\Big| \le E\sup_{f\in\mathcal F}|\ldots| + \sqrt{\frac{2(b-a)^2\, t}{n}}.$$
The term $2(b-a)^2$ is worse than $4(b-a)E\sup_{f\in\mathcal F}|\ldots| + 2\sigma^2$. An algorit...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$$Ef_0 - \frac1n\sum_{i=1}^n f_0(x_i) \le \sup_{f\in\mathcal F_\varepsilon}\Big(Ef - \frac1n\sum_{i=1}^n f(x_i)\Big)$$
is too conservative. Pretend we know $Ef_0 \le \varepsilon$, $f_0 \in \mathcal F_\varepsilon$. Then with probability at least $1 - e^{-t}$,
$$\le E\sup_{f\in\mathcal F_\varepsilon}\Big|Ef - \frac1n\sum_{i=1}^n f(x_i)\Big| + \sqrt{\Big(4E\sup_{f\in\mathcal F_\varepsilon}|\ldots| + 2\sigma_\varepsilon^2\Big)\frac tn} + \frac{t}{3n},$$
where $\sigma_\varepsilon^2 = \sup\ldots$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
Then, for a fixed $k$, with probability at least $1 - e^{-t}\frac{1}{(k+2)^2}$,
$$Ef_0 - \frac1n\sum_{i=1}^n f_0(x_i) \le \varphi(\varepsilon) + \sqrt{\big(4\varphi(\varepsilon) + 2\varepsilon\big)\,\frac{t + 2\log(k+2)}{n}} + \frac{t + 2\log(k+2)}{3n}.$$
For all $k \ge 0$, the statement holds with probability at least
$$1 - \sum_{k} e^{-t}\frac{1}{(k+2)^2} \ge 1 - e^{-t}\Big(\frac{\pi^2}{6} - 1\Big).$$
For $f_0$, fi...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$\le \frac1n\sum_{i=1}^n f_0(x_i) + \Phi(Ef_0)$. Denote $x = Ef_0$. Then $x \le \bar f + \Phi(x)$.

Theorem 38.1. Let $0 \le f \le 1$ for all $f \in \mathcal F$. Define $\mathcal F_\varepsilon = \{f \in \mathcal F : Ef \le \varepsilon\}$ and $\varphi(\varepsilon) = E\sup_{f\in\mathcal F_\varepsilon}\big(Ef - \frac1n\sum_{i=1}^n f(x_i)\big)$. Then, with probability at least $1 - e^{-t}$, for any $f_0 \in \mathcal F$, $Ef_0 \le x^*$, where $x^*$ is the largest solution of
$$x^* = \frac1n\sum_{i=1}^n \ldots$$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$\ldots, y_n) \in A\}$, and $U(A, x) = \mathrm{conv}\, V(A, x)$,
$$d(A, x) = \min\Big\{|s|^2 = \sum_{i=1}^n s_i^2 : s \in U(A, x)\Big\}.$$
In the previous lectures, we proved Theorem 39.1. Today, we prove
$$P(d(A, x) \ge t) \le \frac{1}{P(A)}\, e^{-t/4}.$$
Theorem 39.2. The following are equivalent: (1) $d(A, x) \le t$; (2) $\forall \alpha = (\alpha_1, \ldots, \alpha_n)$, $\exists y \in A$ s.t. $\sum_{i=1}^n \alpha_i\ldots$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$$\le \sqrt{\sum_{i=1}^n \alpha_i^2 \cdot t}.$$
Note that $\sum \alpha_i s_i^0$ is constant on $L$ because $s^0$ is perpendicular to the face. Hence,
$$0 \le \sum \alpha_i I(x_i \ne y_i) \le \sum \alpha_i s_i^0 \le \sqrt{\sum (s_i^0)^2\, t}, \quad\text{and}\quad \sqrt{\sum (s_i^0)^2} \le \sqrt t.$$
Therefore, $d(A, x) \le \sum (s_i^0)^2 \le t$.

We now turn to an application of the above results: Bin Pack...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
one. Hence, $B - 1 \le \sum\ldots$

Theorem 39.3.
$$P\Big(B(x_1, \ldots, x_n) \le M + 2\sqrt{\sum x_i^2 \cdot t} + 1\Big) \ge 1 - 2e^{-t/4}.$$
Proof. Let $A = \{y : B(y_1, \ldots, y_n) \le M\}$, where $P(B \ge M) \ge 1/2$ and $P(B \le M) \ge 1/2$. We proved that $P(d(A, x) \ge t) \le \frac{1}{P(A)}\, e^{-t/4}$. Take $x$ such that $d(A, x) \le t$. Take $\alpha = (x_1, \ldots, x_n)$. Since $d(A, x) \le \ldots$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
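To see the concentration in Theorem 39.3 numerically, here is a small Monte Carlo sketch (my own illustration, not from the lecture), using the first-fit heuristic as a stand-in for the optimal packing number B:

```python
import random

def first_fit(sizes):
    # First-fit heuristic: pack items of size in [0, 1] into unit bins.
    bins = []
    for s in sizes:
        for i in range(len(bins)):
            if bins[i] + s <= 1.0:
                bins[i] += s
                break
        else:
            bins.append(s)
    return len(bins)

random.seed(0)
n, trials = 200, 500
counts = sorted(first_fit([random.random() for _ in range(n)])
                for _ in range(trials))
median = counts[trials // 2]
print(f"median B = {median}, min = {counts[0]}, max = {counts[-1]}")
# The spread around the median stays small even though B itself grows like n,
# which is the qualitative content of the theorem.
```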
$$P\Big(\sum_{i=1}^n x_i^2 \le nEx_1^2 + \sqrt{nEx_1^2 \cdot t} + \frac t3\Big) \ge 1 - e^{-t}.$$
Hence, $B(x_1, \ldots, x_n) \lesssim M + 2\sqrt{nEx_1^2\, t}\,\ldots$

Lecture 40: Entropy tensorization inequality. Tensorization of Laplace transform.

In this lecture, we expose the technique of deriving concentration inequalities with the entropy tensorization inequality. The...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$\ldots - (u - x)\big)\, dP : x \in \mathbb R_+\Big\} = \sup\Big\{\int (u \cdot g)\, dP : \int \exp(g)\, dP \le 1\Big\}.$

Proof. For the first formulation, we define $x$ pointwise by $\frac{\partial}{\partial x}\int \big(u(\log u - \log x) - (u - x)\big)\, dP = \int\big(1 - \tfrac ux\big)\, dP = 0$, and get $x = \int u\, dP > 0$. For the second formulation, the Lagrangian corresponding to the sup is $L(g, \lambda) = \int (ug)\, dP - \lambda\big(\int \exp(g)\, dP - \ldots$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
...40.2. [Tensorization of entropy] Let $X = (X_1, \cdots, X_n)$, $P_n = P_1 \times \cdots \times P_n$, $u = u(x_1, \cdots, x_n)$. Then
$$\mathrm{Ent}_{P_n}(u) \le \int \Big(\sum_{i=1}^n \mathrm{Ent}_{P_i}(u)\Big)\, dP_n.$$
Proof. By induction. When $n = 1$, the above inequality is trivially true. Suppose
$$\int u \log u\, dP_n \le \int u\, dP_n \log \int u\, dP_n + \int \sum_{i=1}^n \mathrm{Ent}_{P_i}(u)\, dP_n.$$
...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$\ldots$ theorem
$$= \int u\, dP_{n+1} \cdot \log \int u\, dP_{n+1} + \int \mathrm{Ent}_{P_{n+1}}(u)\, dP_n + \sum_{i=1}^n \int \mathrm{Ent}_{P_i}(u)\, dP_{n+1}$$
(by convexity of entropy)
$$\le \int u\, dP_{n+1} \cdot \log \int u\, dP_{n+1} + \int \mathrm{Ent}_{P_{n+1}}(u)\, dP_{n+1} + \sum_{i=1}^n \int \mathrm{Ent}_{P_i}(u)\, dP_{n+1}$$
$$= \int u\, dP_{n+1} \cdot \log \int u\, dP_{n+1} + \sum_{i=1}^{n+1} \int \mathrm{Ent}_{P_i}(u)\, dP_{n+1}.$$
By definition of entropy, $\mathrm{Ent}_{P_n\ldots}$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$\ldots$ Moreover,
$$Ee^{\lambda Z} \sum_{i=1}^n \varphi\big(-\lambda(Z - Z^i)\big) = E\sum_{i=1}^n e^{\lambda Z}\varphi\big(-\lambda(Z - Z^i)\big)\Big(\underbrace{I(Z \ge Z^i)}_{\mathrm{I}} + \underbrace{I(Z^i \ge Z)}_{\mathrm{II}}\Big)$$
(switch $Z$ and $Z^i$ in II)
$$= E\sum_{i=1}^n \Big(e^{\lambda Z}\varphi\big(-\lambda(Z - Z^i)\big)\, I(Z \ge Z^i) + e^{\lambda Z^i}\varphi\big(-\lambda(Z^i - Z)\big)\, I(Z \ge Z^i)\Big)\ldots$$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
Let $x_1, \cdots, x_n$ be independent random variables, $x'_1, \cdots, x'_n$ be their independent copies, $Z = Z(x_1, \cdots, x_n)$, $Z^i = Z(x_1, \cdots, x_{i-1}, x'_i, \cdots, x_n)$, and $\varphi(x) = e^x - x - 1$. We have
$$E\lambda Z e^{\lambda Z} - Ee^{\lambda Z} \log Ee^{\lambda Z} \le Ee^{\lambda Z} \sum_{i=1}^n \varphi\big(-\lambda(Z - Z^i)\big).$$
We will use the tensorization of entropy technique to prove the following Hoeffding-type in...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
follows that
$$0 \le Z - Z^i \le \Big(f_i^\circ + \sum_{j \ne i} f_j^\circ\Big) - \Big(a_i + \sum_{j \ne i} f_j^\circ\Big) = f_i^\circ - a_i \le b_i(f^\circ) - a_i(f^\circ).$$
Since $\varphi(x)/x^2$ is increasing on $\mathbb R$ for $\varphi(x) = e^x - x - 1$, and $\lim_{x \to 0} \varphi(x)/x^2 = \frac12$, it follows that for all $x < 0$, $\varphi(x) \le \frac12 x^2$, and
$$E\lambda Z e^{\lambda Z} - Ee^{\lambda Z}\log Ee^{\lambda Z} \le Ee^{\lambda Z}\sum_{i=1}^n \varphi\big(-\lambda(Z - Z^i)\big) \le \frac12\, Ee^{\lambda Z}\ldots$$
Center $Z$, and we get ...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$\ldots$ $F(\lambda) \le \exp\big(\frac12 L\lambda^2\big)$. By Chebyshev's inequality,
$$P(Z \ge EZ + t) \le e^{-\lambda t}\, Ee^{\lambda(Z - EZ)} \le e^{-\lambda t}\, e^{\frac12 L\lambda^2},$$
and minimizing over $\lambda$,
$$P(Z \ge EZ + t) \le e^{-t^2/(2L)}.$$
Let the $f_i$ above be Rademacher random variables and apply Hoeffding's inequality; we get $P\big(Z \ge EZ + \sqrt{2Lt}\big) \le e^{-t}$. As a...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
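The minimization over $\lambda$ in the last step is worth spelling out (a routine calculus step, added here for completeness):
$$\frac{d}{d\lambda}\Big(-\lambda t + \frac{L\lambda^2}{2}\Big) = -t + L\lambda = 0 \;\Rightarrow\; \lambda = \frac{t}{L}, \qquad -\lambda t + \frac{L\lambda^2}{2}\bigg|_{\lambda = t/L} = -\frac{t^2}{L} + \frac{t^2}{2L} = -\frac{t^2}{2L},$$
which gives $P(Z \ge EZ + t) \le e^{-t^2/(2L)}$.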
$\ldots \sum_{j \ne i} f_j$. It follows that $0 \le Z - Z^i \le f_i^\circ \le 1$. Since $\varphi(x) = e^x - x - 1$ is a convex function of $x$,
$$\varphi\big(-\lambda(Z - Z^i)\big) = \varphi\big(-\lambda \cdot (Z - Z^i) + 0 \cdot (1 - (Z - Z^i))\big) \le (Z - Z^i)\,\varphi(-\lambda),$$
and
$$E\lambda Z e^{\lambda Z} - Ee^{\lambda Z}\log Ee^{\lambda Z} \le E e^{\lambda Z} \sum_{i=1}^n \varphi\big(-\lambda(\ldots$$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$$(\lambda - \varphi(-\lambda))\,(\log F(\lambda))' - \log F(\lambda) \le v\,\varphi(-\lambda).$$
Solving the differential equation
$$(41.2)\qquad (\lambda - \varphi(-\lambda))\,\Psi_0'(\lambda) - \Psi_0(\lambda) = v\,\varphi(-\lambda), \qquad \Psi_0 = \log F,$$
yields $\Psi_0 = v \cdot \varphi(\lambda)$. We will proceed to show that $\Psi$ satisfying (41.1) has the property $\Psi \le \Psi_0$: $\ldots (e^\lambda - 1)(1 - \ldots$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
, we get
$$P(Z \ge EZ + t) \le e^{-v \cdot h(t/v)}, \quad\text{where } h(x) = (1 + x)\log(1 + x) - x.$$
The following sub-additive increments bound can be found as Theorem 2.5 in Olivier Bousquet, Concentration Inequalities and Empirical Processes Theory Applied to the Analysis of Learning Algorithms, PhD thesis, Ecole Polytechnique, 2002. ...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
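A standard consequence worth recording (not stated explicitly in this excerpt): since $h(x) \ge \frac{x^2}{2(1 + x/3)}$ for $x \ge 0$, the Bennett-type bound above implies a Bernstein-type bound,
$$P(Z \ge EZ + t) \le e^{-v\, h(t/v)} \le \exp\Big(-\frac{t^2}{2(v + t/3)}\Big).$$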
$$\ldots\lambda))\,(Z - Z_k)\, e^{\lambda Z} + e^{\lambda Z_k} - e^{\lambda Z} = f(\lambda)\,(Z - Z_k)\, e^{\lambda Z} + g(Z - Z_k)\, e^{\lambda Z_k}.$$
In the above, $g(x) = 1 - e^{\lambda x} + (\lambda - f(\lambda))\, x e^{\lambda x}$, and we define $f(\lambda) = \big(1 - e^\lambda + \lambda e^\lambda\big)\big/\big(e^\lambda + \alpha - 1\big)$, where $\alpha = 1/(1 + u)$. We will need the following lemma to make use of the bound on the variance.

Lemma 41.4. For all $x \le 1$, $\lambda \ge 0$ and $\alpha\ldots$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$$(\lambda - f(\lambda))\,\underbrace{\frac{F'(\lambda)}{F(\lambda)}}_{\Psi'(\lambda)} - \underbrace{\log F(\lambda)}_{\Psi(\lambda)} \le f(\lambda)\,\big(\alpha n\sigma^2 + EZ\big).$$
Solving this inequality, we get $F(\lambda) \le e^{v\varphi(\lambda)}$, where $v = n\sigma^2 + (1 + u)EZ$.

Lecture 42: Stein's method for concentration inequalities.

This lecture reviews the method for proving concentration inequaliti...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$$E\big(h(X) f(X)\big) = E\big(h(X) \cdot E(F(X, X')\,|\,X)\big) = E\big(h(X) \cdot F(X, X')\big)$$
($X, X'$ are exchangeable)
$$= E\big(h(X')\, F(X', X)\big)$$
($F(X, X')$ is antisymmetric)
$$= -E\big(h(X')\, F(X, X')\big) = \frac12\, E\big((h(X) - h(X')) \cdot F(X, X')\big).$$
Take $h(X) = 1$: we get $Ef(X) = 0$. Take $h(X) = f(X)$: we get $Ef^2 = \frac12\, E\big((f(X) - f(X'))\, F(X, X'\ldots$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$$\int_0^1 e^{b + t(a-b)}\, dt \le \int_0^1 \big(t e^a + (1-t)e^b\big)\, dt = \frac12\big(e^a + e^b\big).$$
Hence
$$\Big|\frac12\, E\Big(\big(e^{\lambda f(X)} + e^{\lambda f(X')}\big)\,(f(X) - f(X'))\, F(X, X')\Big)\Big| \le |\lambda| \cdot E\bigg(e^{\lambda f(X)}\, \underbrace{\frac12\, E\Big(\big|(f(X) - f(X'))\, F(X, X')\big| \,\Big|\, X\Big)}_{\Delta(X)}\bigg) \le |\lambda|\ldots$$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
$$\ldots \le \frac{\lambda \cdot C}{1 - \lambda B}, \qquad \int_0^\lambda \frac{s\, C}{1 - sB}\, ds \le \frac{C}{1 - \lambda B}\int_0^\lambda s\, ds = \frac12 \cdot \frac{C\lambda^2}{1 - \lambda B}.$$
By Chebyshev's inequality,
$$P(f(X) \ge t) \le \exp\Big(-\lambda t + \frac12 \cdot \frac{\lambda^2 C}{1 - \lambda B}\Big).$$
Minimize the inequality over $0 \le \lambda < \frac1B$: we get $\lambda = \frac{t}{C + Bt}$, and
$$P(f(X) \ge t) \le \exp\Big(-\frac12 \cdot \frac{t^2}{C + Bt}\Big).$$
We ...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
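Plugging $\lambda = t/(C + Bt)$ back into the exponent confirms the stated bound:
$$1 - \lambda B = \frac{C}{C + Bt}, \qquad \frac12\cdot\frac{\lambda^2 C}{1 - \lambda B} = \frac{t^2}{2(C + Bt)}, \qquad -\lambda t + \frac12\cdot\frac{\lambda^2 C}{1 - \lambda B} = -\frac{t^2}{C + Bt} + \frac{t^2}{2(C + Bt)} = -\frac12\cdot\frac{t^2}{C + Bt}.$$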
Example 42.3. Let $(a_{ij})_{i,j=1,\ldots,n}$ be a real matrix where $a_{i\ldots}$, and let $\pi$ be uniformly distributed over the permutations of $\{1, \ldots, n\}$. Let $X = \sum_{i=1}^n a_{i,\pi(i)}$; then $EX = \frac1n \sum_{i,j} a_{i,j}$, and our goal is to bound $|X - EX|$ probabilistically. To apply the above theorem, we define exchangeable pairs ...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
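A quick Monte Carlo illustration of this example (my own sketch; the matrix, sample sizes, and the tail bound $2\exp(-t^2/(4EX + 2t))$ used below are the form in which this concentration result is usually quoted, not text from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
a = rng.random((n, n))   # a_ij in [0, 1]
EX = a.sum() / n         # EX = (1/n) * sum_ij a_ij

# Sample X = sum_i a_{i, pi(i)} over random permutations pi.
X = np.array([a[np.arange(n), rng.permutation(n)].sum()
              for _ in range(5000)])
print(f"EX = {EX:.2f}, empirical mean = {X.mean():.2f}, std = {X.std():.2f}")

t = 30.0
emp = np.mean(np.abs(X - EX) >= t)
bound = 2 * np.exp(-t**2 / (4 * EX + 2 * t))
print(f"P(|X - EX| >= {t}): empirical {emp:.4f} <= bound {bound:.4f}")
```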
$\ldots X')\,|\,X)$
$$\Delta(X) \stackrel{\mathrm{def}}{=} \frac12\, E\big((X - X')^2\,\big|\,\pi\big) = \frac n4\, E\Big(\big(a_{I,\pi(I)} + a_{J,\pi(J)} - a_{I,\pi(J)} - a_{J,\pi(I)}\big)^2\,\Big|\,\pi\Big) \le \frac n2\, E\big(a_{I,\pi(I)} + a_{J,\pi(J)} - a_{I,\pi(J)} - a_{J,\pi(I)}\,\big|\,\pi\big)\,\ldots = X + EX = f(X) + 2EX.$$
Apply the theore...
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
around $\tanh(\beta\, m(\sigma) + \beta h)$, where $\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}$, $\ldots \{1, \ldots, n\}$, and generate $\sigma'$ according to
$$P\big(\sigma'_i = +1\,\big|\,\{\sigma_j : j \ne i\}\big) = \frac{\exp\big(\frac\beta n \sum_{j \ne i} \sigma_j + \beta h\big)}{\exp\big(\frac\beta n \sum_{j \ne i} \sigma_j + \beta h\big) + \exp\big(-\frac\beta n \sum_{j \ne i} \sigma_j - \beta h\big)},$$
$$P\big(\sigma'_i = -1\,\big|\,\{\sigma_j : \ldots$$
https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf
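A runnable sketch of this heat-bath (Glauber) update for the Curie-Weiss model, matching the conditional probabilities above (the parameter values are illustrative choices of mine):

```python
import numpy as np

def glauber_step(sigma, beta, h, rng):
    # Resample one uniformly chosen spin from its conditional law
    # given the others, exactly as in the displayed probabilities.
    n = len(sigma)
    i = rng.integers(n)
    local = (beta / n) * (sigma.sum() - sigma[i]) + beta * h
    p_plus = np.exp(local) / (np.exp(local) + np.exp(-local))
    sigma[i] = 1 if rng.random() < p_plus else -1

rng = np.random.default_rng(0)
n, beta, h = 1000, 0.5, 0.1
sigma = rng.choice(np.array([-1, 1]), size=n)
for _ in range(20000):
    glauber_step(sigma, beta, h, rng)
m = sigma.mean()
# At equilibrium the magnetization concentrates near tanh(beta*m + beta*h).
print(f"m = {m:.3f}, tanh(beta*(m + h)) = {np.tanh(beta * (m + h)):.3f}")
```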
6.080/6.089 Great Ideas in Theoretical Computer Science (Spring 2008). GITCS Feb 07, 2008. Lecturer: Scott Aaronson. Scribe: Mergen Nachin.

Lecture 2

Administrative...
https://ocw.mit.edu/courses/6-080-great-ideas-in-theoretical-computer-science-spring-2008/33742c58e0e02d6ae058fc87c4b981d7_lec2.pdf
down a set of simple, clear rules that one can repeatedly apply to construct complicated objects. We also talked about Euclid's GCD algorithm, which was one of the first non-trivial algorithms known to humankind. Here is a digression. The area of a circle is A = πr². It's obvious that the area of a circle should go...
https://ocw.mit.edu/courses/6-080-great-ideas-in-theoretical-computer-science-spring-2008/33742c58e0e02d6ae058fc87c4b981d7_lec2.pdf
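Since Euclid's GCD algorithm comes up here, a minimal version for reference (my own sketch, not from the lecture notes):

```python
def gcd(a, b):
    # Euclid's algorithm: gcd(a, b) = gcd(b, a mod b), until b = 0.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 198))  # 18
```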
it to logical scrutiny itself? The credit for being the first logician is usually given to Aristotle, who formalized the concept of the syllogism. All men are mortal; Socrates is a man; therefore Socrates is mortal. This is a syllogism. In more modern language, we call it transitivity of implications. In general, a...
https://ocw.mit.edu/courses/6-080-great-ideas-in-theoretical-computer-science-spring-2008/33742c58e0e02d6ae058fc87c4b981d7_lec2.pdf
J and the 2, not the 5. 80-90 percent of college students get this wrong. On the other hand, suppose you ask people the following: you're a bouncer in a bar, and you want to enforce the rule "If you are under 21, then you are not drinking." Who do you have to check to test this rule: someone who is drinking...
https://ocw.mit.edu/courses/6-080-great-ideas-in-theoretical-computer-science-spring-2008/33742c58e0e02d6ae058fc87c4b981d7_lec2.pdf
also the sentence that's talking about them. These three sentences you can think of as meaningless pieces of code; the sentence is addressed to us, telling us one of the rules of the code. Was Aristotle the first person in history to apply such an inference? Obviously he wasn't. As we all see in our everyday li...
https://ocw.mit.edu/courses/6-080-great-ideas-in-theoretical-computer-science-spring-2008/33742c58e0e02d6ae058fc87c4b981d7_lec2.pdf
and uncertain, and people don’t agree how much weight different facts entered into evidence should be assigned. On top of that, the laws themselves are necessarily vague, and people disagree in their preferred interpretation of the law. Nevertheless, this idea of Leibniz that we could automate even part of human thou...
https://ocw.mit.edu/courses/6-080-great-ideas-in-theoretical-computer-science-spring-2008/33742c58e0e02d6ae058fc87c4b981d7_lec2.pdf
do this sort of thinking every morning: ”My socks go on my feet, these are my socks, therefore these go on my feet.” Yet suppose you strung together hundreds or thousands of these baby steps. Then maybe you’d end up with the most profound thought in the history of the world! Conversely, if you consider the most pro...
https://ocw.mit.edu/courses/6-080-great-ideas-in-theoretical-computer-science-spring-2008/33742c58e0e02d6ae058fc87c4b981d7_lec2.pdf
then A ⇒ ¬A is valid. But ¬A ⇒ A is not valid. Similarly, if we assign A=true then A ⇒ ¬A is not valid. Now consider the following example.
• A ⇒ B
• ¬C ⇒ A
• ¬A ⇒ ¬C
• B ⇒ ¬A
Can these sentences simultaneously be satisfied? I.e., is there some way of setting A,B,C,D to "true" or "false" that satisfies all fo...
https://ocw.mit.edu/courses/6-080-great-ideas-in-theoretical-computer-science-spring-2008/33742c58e0e02d6ae058fc87c4b981d7_lec2.pdf
A ⇒ C actually means C is reachable from A. Start with A=true, and if we reach ¬A then it means A ⇒ ¬A. If we also end up connecting ¬A and A, in other words if we have a cycle, then we have discovered a contradiction. What we're talking about are two properties of logical systems called "soundness" and "completenes...
https://ocw.mit.edu/courses/6-080-great-ideas-in-theoretical-computer-science-spring-2008/33742c58e0e02d6ae058fc87c4b981d7_lec2.pdf
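The reachability view of implications is easy to mechanize. A small sketch (mine, not the lecture's) that encodes the four sentences above plus their contrapositives and checks whether some variable reaches its own negation in both directions:

```python
from collections import defaultdict

def reachable(edges, start):
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
    seen, stack = set(), [start]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(graph[u])
    return seen

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

# A => B, ~C => A, ~A => ~C, B => ~A, plus contrapositives ~v => ~u.
edges = [("A", "B"), ("~C", "A"), ("~A", "~C"), ("B", "~A")]
edges += [(neg(v), neg(u)) for u, v in edges]

for x in ["A", "B", "C"]:
    down = neg(x) in reachable(edges, x)   # x leads to ~x
    up = x in reachable(edges, neg(x))     # ~x leads to x
    verdict = ("contradiction" if down and up else
               "forced false" if down else
               "forced true" if up else "free")
    print(x, verdict)  # all three print "contradiction": unsatisfiable
```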
Introduction to Simulation - Lecture 10: Modified Newton Methods. Jacob White. Thanks to Deepak Ramaswamy, Jaime Peraire, Michal Rewienski, and Karen Veroy.

Outline
• Damped Newton Schemes
– Globally convergent if Jacobian is nonsingular
– Difficulty with singular Jacobians
• Introduce Continuation Schemes
– Problem with...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
be used to find the zero of the function provided you already know the answer. We need a way to develop Newton methods which converge regardless of the initial guess!

Non-converging Case (1-D picture). [Figure: f(x) with Newton iterates x0, x1 failing to converge.] Limiting the changes in X might improve convergence.

Newton Method wit...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
$\ldots x^{k+1}$). The method performs a one-dimensional search in the Newton direction.

Damped Newton Convergence Theorem. If
a) $\|J_F(x)^{-1}\| \le \beta$ (the inverse is bounded), and
b) $\|J_F(x) - J_F(y)\| \le \ell\,\|x - y\|$ (the derivative is Lipschitz continuous),
then there exists a set of $\ldots$ $F(x\ldots$
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
$$I_r - \frac{1}{10} V_r = 0, \qquad I_d - I_s\big(e^{V_d/V_t} - 1\big) = 0.$$
Nodal equation with numerical values:
$$f(v_2) = \frac{v_2 - 1}{10} + 10^{-16}\big(e^{v_2/0.025} - 1\big) = 0.$$
Newton Method with Limiting: Damped Newton Example (cont.). Starting from $x^0$, repe...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
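A compact sketch of damped Newton with step-halving on the diode equation above (my own illustration; the device values are the ones reconstructed from the slide):

```python
import math

def f(v):
    # Nodal equation: resistor current plus diode current.
    return (v - 1.0) / 10.0 + 1e-16 * (math.exp(v / 0.025) - 1.0)

def fprime(v):
    return 0.1 + (1e-16 / 0.025) * math.exp(v / 0.025)

def damped_newton(x0, tol=1e-12, max_iter=100):
    x = x0
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, k
        dx = -fx / fprime(x)
        alpha = 1.0
        # Halve the step until the residual decreases (the "limiting").
        while abs(f(x + alpha * dx)) >= abs(fx) and alpha > 1e-10:
            alpha /= 2
        x += alpha * dx
    return x, max_iter

v, iters = damped_newton(0.0)
print(f"v2 = {v:.4f} after {iters} iterations, residual = {f(v):.2e}")
```

Plain Newton from v = 0 overshoots into the steep exponential region; the step-halving accepts only updates that shrink |f|, which is exactly the "limiting" idea on the slide.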
$\ldots \alpha^k J_F(x^k)^{-1} F(x^k)$ (the Newton direction).

Multidimensional Mean Value Lemma:
$$\big\|F(y) - F(x) - J_F(x)(y - x)\big\| \le \frac{\ell}{2}\,\|x - y\|^2.$$
Combining, with $x^{k+1} = x^k - \alpha^k J_F(x^k)^{-1} F(x^k)$,
$$\Big\|F(x^{k+1}) - F(x^k) + \alpha^k J_F(x^k)\, J_F(x^k)^{-1} F(x^k)\Big\| \le \frac{\ell}{2}\big(\alpha^k\big)^2\,\big\|J_F(x^k)^{-1} F(x^k)\big\|^2.$$
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
$$\|F(x^{k+1})\| \le \Big[1 - \alpha^k + \big(\alpha^k\big)^2\,\frac{\ell\beta^2}{2}\,\|F(x^k)\|\Big]\,\|F(x^k)\|.$$
This yields a quadratic in the damping coefficient.

Damped Newton Theorem Proof (cont. II). Simplifying the quadratic from the previous slide:
$$\|F(x^{k+1})\| \le \Big[1 - \alpha^k\Big(1 - \alpha^k\,\frac{\ell\beta^2}{2}\,\|F(x^k)\|\Big)\Big]\,\|F(x^k)\|.$$
Two c...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
$\ldots \|F(x^{k+1})\| \le \gamma\,\|F(x^k)\|$: not yet a convergence theorem, since $\gamma$ must be independent of $k$. For the case where $\frac{\ell\beta^2}{2}\|F(x^k)\| > 1$,
$$\|F(x^{k+1})\| \le \Big(1 - \frac{1}{2\beta^2\ell\,\ldots}\Big)\,\|F(x^k)\| \ldots$$
Note the proof technique: first, show that the iterates do not increase; second, use the non-increasing fact to prove convergence.
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
initial guess
– Generate a sequence of problems
– Make sure the previous problem generates a good guess for the next problem
• Heat-conducting bar example:
1. Start with the heat off; T = 0 is a very close initial guess.
2. Increase the heat slightly; T = 0 is a good initial guess.
3. Increase the heat again.

Continuation Schemes...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
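A minimal source-stepping sketch in the same spirit (my own illustration, reusing the diode example: the source value is scaled by lambda, and each solve is warm-started at the previous solution):

```python
import math

def newton(f, fp, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fp(x)
    return x

v = 0.0                    # lambda = 0: source off, v = 0 is the exact solution
for k in range(1, 11):
    lam = k / 10
    f = lambda v, lam=lam: (v - lam) / 10 + 1e-16 * (math.exp(v / 0.025) - 1)
    fp = lambda v: 0.1 + (1e-16 / 0.025) * math.exp(v / 0.025)
    v = newton(f, fp, v)   # previous solution is the initial guess
    print(f"lambda = {lam:.1f}: v = {v:.4f}")
```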
$$f(v(\lambda), \lambda) = 0, \qquad \frac{\partial f}{\partial v} = \ldots + \frac{\partial i_{\mathrm{diode}}}{\partial v}(v) + \frac1R \quad\text{(not $\lambda$ dependent!)}$$
Source/load stepping does not alter the Jacobian.

Continuation Schemes: Jacobian Altering Scheme. Description: $\tilde F(x(\lambda), \lambda) = \ldots$
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
$\ldots$ If converged: $\lambda_{\mathrm{prev}} = \lambda$, $x(\lambda_{\mathrm{prev}}) = x(\lambda)$, $\lambda = \lambda + \delta\lambda$. Else: $\delta\lambda = \delta\lambda/2$, $\lambda = \lambda_{\mathrm{prev}} + \delta\lambda$. }

Continuation Schemes: Jacobian Altering Scheme. Initial guess for each step: $x^0(\lambda + \delta\lambda) = x(\lambda)$. [Figure: initial-guess error between $x^0(\lambda + \delta\lambda)$ and the solution curve $x(\lambda)$.]

Continuation Schemes: Jacobian Altering Schem...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
...puted.

Continuation Schemes: Jacobian Altering Scheme. Update Improvement (cont. II):
$$x^0(\lambda + \delta\lambda) = x(\lambda) - \left(\frac{\partial \tilde F(x, \lambda)}{\partial x}\right)^{-1} \frac{\partial \tilde F(x, \lambda)}{\partial \lambda}\, \delta\lambda.$$
[Figure: graphically, the predictor $x^0(\lambda + \delta\lambda)$ follows the tangent of the solution curve $x(\lambda)$.]

Continuation Schemes: $x(\lambda)$, Jacobian A...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
$$\begin{bmatrix} \ldots & \ldots \\ \ldots & \ldots \end{bmatrix} \begin{bmatrix} x^{k+1} - x^k \\ \lambda^{k+1} - \lambda^k \end{bmatrix} = -\begin{bmatrix} \tilde F(x^k, \lambda^k) \\ \|x^k - x_{\mathrm{prev}}\|^2 + (\lambda^k - \lambda_{\mathrm{prev}})^2 - \mathrm{arc}^2 \end{bmatrix}$$
Continuation Schemes: Jacobian Altering Scheme, arc-length. [Figure: solution curve $x(\lambda)$ with a turning point at $\lambda_1$.] Turning point: what happens here? The upper left-hand block is singular...
https://ocw.mit.edu/courses/6-336j-introduction-to-numerical-simulation-sma-5211-fall-2003/33912d2b3446ff62fcd51b7238469f10_lec10.pdf
6.867 Machine learning, lecture 8 (Jaakkola)

Lecture topics:
• Support vector machine and kernels
• Kernel optimization, selection

Support vector machine revisited

Our task here is to first turn the support vector machine into its dual form, where the examples only appear in inner products. To this end, assume ...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
0 means that all the components $\alpha_t$ are non-negative. Let's try to see first that $J(\theta, \theta_0)$ really is equivalent to the original problem. Suppose we set $\theta$ and $\theta_0$ such that at least one of the constraints, say the one corresponding to $(x_i, y_i)$, is violated. In that case
$$-\alpha_i\big(y_i(\theta^T\phi(x_i) + \theta_0) - 1\big) > 0 \qquad (4)$$
for any...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
$\ldots \ge 0$ $\ldots$ $\theta, \theta_0$ (6). The left-hand side, equivalent to minimizing Eq. (5), is known as the primal form, while the right-hand side is the dual form. Let's solve the right-hand side by first obtaining $\theta$ and $\theta_0$ as a function of the Lagrange multipliers (and the data). To this end,
$$\frac{d}{d\theta_0} J(\theta, \theta_0; \alpha) = -\sum_t \alpha_t y_t = 0, \qquad \frac{d}{d\theta} J(\theta, \theta_0\ldots$$
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
quadratic optimization problem. The constraints are simpler, however. Moreover, the dimension of ...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
$$\hat\theta^T\phi(x) = \sum_{t \in SV} \hat\alpha_t y_t\,\big[\phi(x_t)^T\phi(x)\big] + \hat\theta_0 \qquad (14)-(16)$$
where $SV$ is the set of support vectors corresponding to non-zero values of $\alpha_t$. We don't know which examples (feature vectors) become support vectors until we have solved the optimization problem. Moreover, the identity of the support vectors will depend on the feature map...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
$$\hat\gamma_{\mathrm{geom}} = \bigg(\sum_{i=1}^n \sum_{j=1}^n \hat\alpha_i \hat\alpha_j y_i y_j K(x_i, x_j)\bigg)^{-1/2} \qquad (18)$$
Would it make sense to compare the geometric margins we attain with different kernels? We could perhaps use it as a criterion for selecting the best kernel function. Unfortunately this won't work without some care. For example, if we multiply all the f...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
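A small numpy sketch of Eq. (18); the data, kernel choice, and dual values below are made-up placeholders, and only the margin formula itself comes from the text:

```python
import numpy as np

def rbf_kernel(X, beta=1.0):
    # K[i, j] = exp(-beta * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-beta * d2)

def geometric_margin(alpha_hat, y, K):
    # Eq. (18): (sum_ij alpha_i alpha_j y_i y_j K(x_i, x_j))^(-1/2)
    v = alpha_hat * y
    return float(v @ K @ v) ** -0.5

X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha_hat = np.array([0.5, 0.5, 0.5, 0.5])   # placeholder dual solution
print(geometric_margin(alpha_hat, y, rbf_kernel(X)))
```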
larger values for constraints that are harder to satisfy. Without any upper limit, they would simply reach $\infty$ for any constraint that cannot be satisfied. The limit $C$ specifies the point when we should stop trying to satisfy such constraints. More formally, the dual form is
$$\sum_{t=1}^n \alpha_t - \frac12 \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j \big[\phi(\ldots$$
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
we are faced with the problem of selecting an appropriate kernel function. A step in this direction might be to tailor a particular kernel a bit better to the available data. We could, for example, introduce additional parameters in the kernel and optimize those parameters so as to improve the performance. These pa...
https://ocw.mit.edu/courses/6-867-machine-learning-fall-2006/33a6c8e66c62602f9f03ab6a2c632eed_lec8.pdf
18.969 Topics in Geometry: Mirror Symmetry (Spring 2009).

MIRROR SYMMETRY: LECTURE 9. DENIS AUROUX.

1. The Quintic (contd.)

To recall where we were, we had (1) $X_\psi = \{(x_0 : \cdots$
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
$\ldots \frac{(5n)!}{(5\psi)^{5n}(n!)^5}\ldots$ over $T_0$. In terms of $z = (5\psi)^{-5}$, the period is proportional to
$$(5)\qquad \phi_0(z) = \sum_{n=0}^\infty \frac{(5n)!}{(n!)^5}\, z^n.$$
Setting $\Theta = z\frac{d}{dz}$ (so that $\Theta(\sum c_n z^n) = \sum n c_n z^n$), we obtained the Picard-Fuchs equation
$$(6)\qquad \Theta^4\phi_0 = 5z\,(5\Theta + 1)(5\Theta + 2)(5\Theta + 3)(5\Theta + 4)\,\phi_0.$$
Proposition 1. All periods $\int \check\Omega_\psi$ satisfy this equation.

Note that a...
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
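Equation (6) pins down the coefficients of $\phi_0$: matching powers of $z$ gives the recurrence $(n+1)^4 c_{n+1} = 5(5n+1)(5n+2)(5n+3)(5n+4)\, c_n$, which the closed form $c_n = (5n)!/(n!)^5$ satisfies. A quick mechanical check (my own, not from the notes):

```python
from math import factorial

def c(n):
    # Coefficients of phi_0(z) = sum_n (5n)!/(n!)^5 z^n
    return factorial(5 * n) // factorial(n) ** 5

for n in range(20):
    lhs = (n + 1) ** 4 * c(n + 1)
    rhs = 5 * (5*n + 1) * (5*n + 2) * (5*n + 3) * (5*n + 4) * c(n)
    assert lhs == rhs
print("Picard-Fuchs coefficient recurrence holds for n < 20")
```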
$\ldots$ on $X$, which is ideally a 3-form on $X$, but is at least a class in $H^3(X, \mathbb C)$. Recall from complex analysis: if $\phi(z)$ has a pole at $0$, then $\mathrm{res}_0(\phi) = \frac{1}{2\pi i}\oint_{S^1} \phi(z)\, dz$. Now, let's say that we have a 3-cycle $C$ in $X$: we can associate a "tube" 4-cycle in $\mathbb P^4$ which is the preimage of $C$ in the boundary of a tubular neighborhood ...
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
$dx_j \wedge \cdots \wedge dx_4$ with $\deg(g_0 \cdots g_4) = 5\ell - 4$, then
$$(13)\qquad d\phi = \sum_j \frac{1}{f_\psi^{\ell+1}}\Big(\ell\, g_j\, \frac{\partial f_\psi}{\partial x_j} - f_\psi\, \frac{\partial g_j}{\partial x_j}\Big)\,\Omega.$$
In particular, if we have something of the form $\big(\sum g_j \frac{\partial f_\psi}{\partial x_j}\big)\frac{\Omega}{f_\psi^{\ell+1}}$ (the Jacobian ideal is the span of $\{\frac{\partial f_\psi}{\partial x_i}\}$), it can be written as somethi...
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
$\big(f(z), \Theta f(z), \ldots, \Theta^{s-1} f(z)\big)$ $\ldots$ (the companion matrix, with entries $1$ above the diagonal and last row $\ldots, -B_{s-1}(z)$). The fundamental theorem for these differential equations states that there exists a constant $s \times s$ matrix $R$ and an $s \times s$ matrix of holomorphic functions $S(z)$ s.t.
$$(16)\qquad \Phi(z) = S(z)\exp\big((\log z)R\big) = S(z)\Big(\mathrm{id} + (\log z)R + \frac{\log^2 z}{2}R^2 + \cdots\Big)$$
is a fundamenta...
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
$$e^{2\pi i R} = \begin{pmatrix} 1 & 2\pi i & \frac{(2\pi i)^2}{2} & \frac{(2\pi i)^3}{6} \\ 0 & 1 & 2\pi i & \frac{(2\pi i)^2}{2} \\ 0 & 0 & 1 & 2\pi i \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
If $\omega(z) = \int_\beta \check\Omega_\psi$ is a period, then it is a solution of the Picard-Fuchs equation, and thus a linear combination of the $\Phi(z)_{1i}$'s. There exists a basis $b_1, \ldots, b_4$ of $H_3(\check X, \mathbb C)$ s.t. $\int_{b_i} \check\Omega_\psi = \Phi(z)_{1i}$. The monodromy action...
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
$$\Theta^j\big(f(z)\log z\big) = (\Theta^j f)\log z + j\,(\Theta^{j-1} f).$$
If we write $F(x) = x^4 - 5z\prod_{j=1}^4 (5x + j)$, then
$$(21)\qquad D\phi_1(z) = F(\Theta)\big(\phi_0(z)\log z + \tilde\phi(z)\big) = (F(\Theta)\phi_0)\log z + F'(\Theta)\phi_0 + F(\Theta)\tilde\phi.$$
Since $0 = D\phi_0 = D\phi_1$, we find $D\tilde\phi(z) = -F'(\Theta)\phi_0(z)$. This gives a recurrence relation on the coefficients of $\tilde\phi(z)$, and one obtains: $\ldots \frac1n z\ldots$
https://ocw.mit.edu/courses/18-969-topics-in-geometry-mirror-symmetry-spring-2009/33dfafe4343fb216e890aa96617797e3_MIT18_969s09_lec09.pdf
Substitution of Power Series

We can find the power series of $e^{-t^2}$ by starting with the power series for $e^x$ and making the substitution $x = -t^2$.
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \quad (R = \infty)$$
$$e^{-t^2} = 1 + (-t^2) + \frac{(-t^2)^2}{2!} + \frac{(-t^2)^3}{3!} + \cdots = 1 - t^2 + \frac{t^4}{2!} - \frac{t^6}{3!} + \cdots$$
The signs of the terms a...
https://ocw.mit.edu/courses/18-01sc-single-variable-calculus-fall-2010/33f76c6f2abf4190beb0a0cf29065926_MIT18_01SCF10_Ses100d.pdf
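The substitution can be checked mechanically, e.g. with sympy (a throwaway verification of mine, not part of the original notes):

```python
import sympy as sp

t = sp.symbols('t')
direct = sp.series(sp.exp(-t**2), t, 0, 8).removeO()
substituted = sum((-t**2)**k / sp.factorial(k) for k in range(4))
print(sp.expand(direct - substituted))  # 0
```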
L2: Combinational Logic Design (Construction and Boolean Algebra)

Acknowledgements: Materials in this lecture are courtesy of the following sources and are used with permission. Prof. Randy Katz (Unified Microelectronics Corporation Distinguished Prof...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
Introductory Digital Systems Laboratory

NMOS Device Characteristics: body, source, polysilicon gate, drain; n+ regions in a p substrate, with an inversion-layer channel under the gate oxide. [Figure: I_D vs V_DS curves for V_GS = 1.0 V to 2.5 V, showing the resistive and saturation regions.] ...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
[Figure: CMOS inverter voltage transfer characteristic, V_out vs V_in for V_in = 0 to 2.5 V, with the NMOS and PMOS operating regions marked.] CMOS gates have:
• rail-to-rail swing (0 V to VDD)
• large noise margins
• "zero" static power dissipation
...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
$\ldots$ X AND Y $\ldots$ In general, there are $2^{2^n}$ functions of $n$ inputs.

Common Logic Gates (gate, truth table, expression):
X Y | AND | OR | NAND | NOR
0 0 |  0  |  0 |  1   |  1
0 1 |  0  |  1 |  1   |  0
1 0 |  0  |  1 |  1   |  0
1 1 |  1  |  1 |  0   |  0
...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
Generic CMOS Recipe: Vdd at the top; the pullup network (PUN) makes the connection when we want F(A1,…,An) = 1, and the pulldown network (PDN) makes the connection when we want F(A1,…,An) = 0. Note: CMOS gates result in inverting functions! (It is easier to build NAND vs. AND.) [Figure/table: PDN and PUN conduction states vs. output O.] ...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
$\ldots$ (X • Z)
8D. X + (Y • Z) = (X + Y) • (X + Z)
Uniting:
9. X • Y + X • Y' = X
9D. (X + Y) • (X + Y') = X
Absorption:
10. X + X • Y = X
10D. X • (X + Y) = X
11. (X + Y') • Y = X • Y
11D. (X • Y') + Y = X + Y

Theorems of Boolean Algebra (II) T...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
f(X1,X2,...,Xn,0,1,+,•) ⇔ f(X1,X2,...,Xn,1,0,•,+)

Simple Example: One-Bit Adder
• 1-bit binary adder
– inputs: A, B, Carry-in (Cin)
– outputs: Sum (S), Carry-out (Cout)

A B Cin | S Cout
0 0  0  | 0  0
0 0  1  | 1  0
0 1  0  | 1  0
0 1  1  | 0  1
1 0  0  | 1  0
1 0  1  | 0  1
1 1  0  | 0  1
1 1  1  | 1  1

Sum-of-Products Can...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
$\ldots$B
S = A' B' Cin + A' B Cin' + A B' Cin' + A B Cin
  = (A' B + A B') Cin' + (A' B' + A B) Cin
  = (A ⊕ B) Cin' + (A ⊕ B)' Cin
  = A ⊕ B ⊕ Cin

Sum-of-Products & Product-of-Sums
• Product term (or minterm): ANDed product of literals –...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
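A two-line check of the derivation against the truth table (using the standard majority form for Cout, which is not derived in this excerpt):

```python
from itertools import product

for A, B, Cin in product([0, 1], repeat=3):
    total = A + B + Cin
    S, Cout = total % 2, total // 2
    assert S == A ^ B ^ Cin                          # derived above
    assert Cout == (A & B) | (A & Cin) | (B & Cin)   # majority form
    print(A, B, Cin, "->", S, Cout)
```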
A B C | Maxterm
0 0 0 | A + B + C      M0
0 0 1 | A + B + C'     M1
0 1 0 | A + B' + C     M2
0 1 1 | A + B' + C'    M3
1 0 0 | A' + B + C     M4
1 0 1 | A' + B + C'    M5
1 1 0 | A' + B' + C    M6
1 1 1 | A' + B' + C'   M7

Short-hand notation for maxterms of 3 variables. F in canonical form:
F(A, B, C) = ΠM(0,2,4) = M0 • M2 • M4 = (A + B + C)(A + B' + C)(A' + B + C)
canonical form ...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
list the indices not already used in F. E.g., F(A,B,C) = Σm(3,4,5,6,7) gives F'(A,B,C) = Σm(0,1,2) = ΠM(3,4,5,6,7).
4. Minterm expansion of F to Maxterm expansion of F': rewrite in Maxterm form, using the same indices as F. E.g., F(A,B,C) = Σm(3,4,5,6,7) gives F'(A,B,C) = ΠM(3,4,5,6,7) = Σm(0,1,2).
...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
$\ldots$ube. [Figure: Boolean cubes of increasing dimension: 1-cube (X), 2-cube (X,Y), 3-cube (X,Y,Z), 4-cube (W,X,Y,Z), with vertices labeled by bit strings.]

Mapping Truth Tables onto Boolean Cube...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
The on-set is completely covered by the combination (OR) of the subcubes of lower dimensionality; note that "111" is covered three times.

Higher Dimension Cubes. [Figure: 3-cube with the on-set F(A,B,C) = Σm(4,5,6,7) highlighted.] F(A,B,C) = Σm(4,5,6,7) on-set...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
• Numbering scheme based on Gray code: e.g., 00, 01, 11, 10 (only a single bit changes in the code for adjacent map cells). [Figure: 2-variable, 3-variable, and 4-variable K-map templates with Gray-coded row/column labels.] ...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
F(A,B,C) = Σm(0,4,5,7). For F', simply replace 1's with 0's and vice versa: F'(A,B,C) = Σm(1,2,3,6).

Four-Variable Karnaugh Map. [Figure: 4-variable K-map over A,B (columns) and C,D (rows) with the 1-entries filled in.] F(A...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
F(A,B,C,D) = Σm(1,3,5,7,9) + Σd(6,12,13). Without don't cares: F = A'D + B'C'D. With don't cares: F = C'D + A'D. By treating a don't care as a "1", a 2-cube can be formed rather than a 0-cube. In PoS form: F = D(A' + C'), an equivalent answer as above, but with fewer literals. [Figure: K-map with X's marking the don't-care cells.] ...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
fix it, cover it up with another grouping or product term! [Figure: K-map for F = A • C' + B • C showing the hazard; adding the consensus term A • B covers the transition. Figure by MIT OpenCourseWare.] F = A • C' + B • C + A • B.
• In general, it is difficult to avoid hazards; we need a robust design methodology to deal with hazards.
...
https://ocw.mit.edu/courses/6-111-introductory-digital-systems-laboratory-spring-2006/340fab3bd54502c96ef177100c99b0f8_l2_combi_logic.pdf
2.3.2 Swollen (coil) polymers in good solvents

Most of the terms in the trial free energy of Eq. (2.49) have a definite sign. The exception is the term proportional to $N^2(a/R)^3$, which has opposing contributions from the repulsive and attractive parts of the potential, and is proportional to $(\chi - 1/2)$. The sign of this term ...
https://ocw.mit.edu/courses/8-592j-statistical-physics-in-biology-spring-2011/341e304796bde8c040895e83d45706ea_MIT8_592JS11_lec11.pdf
treatment is not trivial, and one of the triumphs of renormalization group theory is to estimate the exact value of $\nu = 0.591\ldots$, remarkably close to the Flory approximation of $3/5$. While not directly relevant to real polymers, it is possible to inquire about the exponent $\nu$ for self-avoiding walks in $d$ spatial dimen...
https://ocw.mit.edu/courses/8-592j-statistical-physics-in-biology-spring-2011/341e304796bde8c040895e83d45706ea_MIT8_592JS11_lec11.pdf
$\ldots)^3$. The leading terms in the expansion of the variational free energy can now be recast as
$$-\frac{\ln Z(\rho)}{N} = -\ln g + \frac{1 - 2\chi}{2}\,\rho + \frac{\rho^2}{6} + \text{higher order terms}. \qquad (2.54)$$
The optimal density for $T < \theta$ is obtained by minimizing the above free energy, leading to
$$-\frac1N \frac{d\ln Z}{d\rho} = \Big(\frac12 - \chi\Big) + \frac{\rho}{3} + \cdots = 0, \qquad \rho = 3\Big(\chi\ldots$$
https://ocw.mit.edu/courses/8-592j-statistical-physics-in-biology-spring-2011/341e304796bde8c040895e83d45706ea_MIT8_592JS11_lec11.pdf
cooling of liquids typically leads to frozen states with even lower entropy. We may thus inquire if such a freezing transition also exists for polymers.

2.3.4 The Random Energy Model (REM) for compact heteropolymers

Deep in the globular phase, the states of the compact polymer can be visualized as the collection ...
https://ocw.mit.edu/courses/8-592j-statistical-physics-in-biology-spring-2011/341e304796bde8c040895e83d45706ea_MIT8_592JS11_lec11.pdf
and as long as the number of terms $N_B$ in Eq. (2.58) is large, $\ldots$ taken from a Gaussian distribution. The mean and variance of the distribution are given by
$$\langle E_\alpha \rangle = N_B \langle V_{ab}\rangle \equiv N\varepsilon_0, \qquad \langle E_\alpha^2 \rangle_c = N_B \langle V_{ab}^2\rangle_c \equiv N\sigma^2, \qquad (2.60)$$
where, noting that $N_B = (z - 1)N$ (of the $z$ contacts per each site of the lattice, one is polymeric), we have folded...
https://ocw.mit.edu/courses/8-592j-statistical-physics-in-biology-spring-2011/341e304796bde8c040895e83d45706ea_MIT8_592JS11_lec11.pdf
$$S(E_c) = 0 \;\Rightarrow\; \frac{E_c}{N} = \varepsilon_0 - \sigma\sqrt{2\ln g'}. \qquad (2.63)$$
(Note the connection to the extreme value problem studied earlier: $E_c$ is also the mean value of the lowest of $g'^N$ energies randomly selected from $p(E)$.) The singularity of entropy at $E_c$ signifies a phase transition into a glassy state, at a temperature $T_c$ given by
$$\frac{1}{T_c} = \frac{dS}{\ldots}$$
https://ocw.mit.edu/courses/8-592j-statistical-physics-in-biology-spring-2011/341e304796bde8c040895e83d45706ea_MIT8_592JS11_lec11.pdf
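A quick derivation of Eq. (2.63), assuming the usual REM microcanonical entropy $S(E) = N\ln g' - (E - N\varepsilon_0)^2/(2N\sigma^2)$:
$$S(E_c) = 0 \;\Rightarrow\; \frac{(E_c - N\varepsilon_0)^2}{2N\sigma^2} = N\ln g' \;\Rightarrow\; E_c - N\varepsilon_0 = -N\sigma\sqrt{2\ln g'} \;\Rightarrow\; \frac{E_c}{N} = \varepsilon_0 - \sigma\sqrt{2\ln g'},$$
taking the lower root since freezing occurs at the low-energy edge.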
energy (En < Ec) representing the native configuration. With the added state at En, the system makes a transition to the native state (i.e. folds) at a temperature Tf , high enough that there are still many equivalent states to explore. The location of Tf , and the corresponding energy Ef , can be obtained by equating f...
https://ocw.mit.edu/courses/8-592j-statistical-physics-in-biology-spring-2011/341e304796bde8c040895e83d45706ea_MIT8_592JS11_lec11.pdf
at $E_f$ for $\beta \ge \ldots$
$$e^{-\beta_f E_n} = \Omega(E_f)\, e^{-\beta_f E_f}, \qquad (2.67)$$
which after taking the logarithm leads to the tangent rule in Eq. (2.65). We can eliminate $E_f$ in terms of $\beta_f$ by noting that $E = N\varepsilon_0 - N\sigma^2\beta$ and $\ln g' = (\beta_c\sigma)^2/2$. Using these expressions and defining a quantity $\beta_n = (E_n - N\varepsilon_0)/(N\sigma^2)$, the above equation reduces to $\ldots \beta_f^2 \ldots$
https://ocw.mit.edu/courses/8-592j-statistical-physics-in-biology-spring-2011/341e304796bde8c040895e83d45706ea_MIT8_592JS11_lec11.pdf