…joule−sec (Planck's constant)

Bohr magneton m_B (smallest unit of magnetic moment):

m_B = e ℏ / (2 m_e) = e h / (2π (2 m_e)) = e h / (4π m_e) ≈ 9.3 × 10⁻²⁴ amp−m²
Imagine all Bohr magnetons in sphere of radius R aligned. Net magnetic moment is
m = (4/3) π R³ ρ m_B   (ρ = density of Bohr magnetons per unit volume)

Avogadro's number = 6.023 × 10²⁶ …

[Source: https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/32f92e8162acbc3041e3ac32861394bb_lecture8.pdf]
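A quick numeric sanity check of the Bohr magneton value quoted above (not part of the original notes; the constants are standard rounded SI values):

```python
import math

# Physical constants (SI), rounded
e = 1.602e-19      # electron charge [C]
h = 6.626e-34      # Planck's constant [J*s]
m_e = 9.109e-31    # electron mass [kg]

# Bohr magneton m_B = e*h/(4*pi*m_e)
m_B = e * h / (4 * math.pi * m_e)
print(m_B)  # approximately 9.3e-24 A*m^2, matching the notes
```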
Magnetic Dipole Field

H = m / (4π r³) [2 cos θ î_r + sin θ î_θ] = μ₀ m / (4π μ₀ r³) [2 cos θ î_r + sin θ î_θ]   (multiply top & bottom by μ₀)

Electric Dipole Field

E = p / (4π ε₀ r³) [2 cos θ î_r + sin θ î_θ]

Analogy

p → μ₀ m
P = N p ⇒ M = N m ,  N = # of magnetic dipoles / volume
Polarization ↔ Magnetization
II. Maxwell's Equations with Magnetization

Boundary terms (the magnetic surface charge density is the analog of the polarization surface charge σ_sp = −n · [Pᵃ − Pᵇ]):

σ_sm = −n · [μ₀ (Mᵃ − Mᵇ)]

MQS Equations:

∇ × H = J
∇ × E = −∂/∂t [μ₀ (H + M)] = −∂B/∂t
B = μ₀ (H + M)   (magnetic flux density B has units of teslas; 1 tesla = 10,000 gauss)
∇ · B = 0
n · [Bᵃ − Bᵇ] = 0

v = dλ/dt ,  λ = ∫ B · da  (total flux…)
On the axis of a uniformly magnetized cylinder (radius R, length d, magnetization M₀ î_z), the axial flux density is

B_z = (μ₀ M₀ / 2) { (z + d/2) / [R² + (z + d/2)²]^(1/2) − (z − d/2) / [R² + (z − d/2)²]^(1/2) }

with H_z = B_z/μ₀ for z > d/2 and H_z = B_z/μ₀ − M₀ for −d/2 < z < d/2.
IV. Toroidal Coil
N1 turns
Courtesy of Hermann A. Haus and James R. Melcher. Used with permission.
6.641, Electromagnetic Fields, Forces, and Motion
Prof. Markus Zahn
dλ₂/dt ≈ R C₂ dV_v/dt ⇒ λ₂ = R C₂ V_v   (V_v = vertical voltage to oscilloscope)

λ₂ = N₂ B (π w² / 4) ⇒ V_v = (1 / (R C₂)) N₂ B π w² / 4
ℛ = s / (μ₀ D d) = (length) / [(permeability)(cross-sectional area)]   [Reluctance, analogous to resistance]

Series / Parallel

From Electromagnetic Field Theory: A Problem Solving Approach, by Markus Zahn, 1987. Used with permission.

A. Reluctances in Series

ℛ₁ = s₁ / (μ₁ a₁ D) ,  ℛ₂ = s₂ / (μ₂ a₂ D)

Φ = N i / (ℛ₁ + ℛ₂) …
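The resistance analogy can be sketched numerically; the geometry and drive numbers below are made up for illustration, not taken from the figure:

```python
import math

def reluctance(length, mu, area):
    # R = length / (mu * area), the magnetic analog of resistance
    return length / (mu * area)

mu0 = 4e-7 * math.pi

# Hypothetical series magnetic circuit: two sections with lengths s1, s2 and
# cross-sections a1*D, a2*D, driven by N*i ampere-turns (illustrative values)
N, i, D = 100, 2.0, 0.01
R1 = reluctance(0.001, 1000 * mu0, 0.02 * D)
R2 = reluctance(0.002, 1000 * mu0, 0.01 * D)

# Series reluctances add, like series resistors: flux = mmf / total reluctance
flux = N * i / (R1 + R2)
print(flux)
```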
Courtesy of Krieger Publishing. Used with permission.
A. Voltage/Current Relationships
Φ = (N₁ i₁ − N₂ i₂) / ℛ ;  ℛ = l / (μ A)

Another way:

∮_C H · dl = H l = N₁ i₁ − N₂ i₂  ⇒  H = (N₁ i₁ − N₂ i₂) / l …
v₂ = dλ₂/dt = +M di₁/dt − L₂ di₂/dt = N₂ L₀ [N₁ di₁/dt − N₂ di₂/dt]

v₁ / v₂ = N₁ / N₂

lim_{μ→∞} H → 0 ⇒ N₁ i₁ = N₂ i₂ ⇒ i₁ / i₂ = N₂ / N₁

v₁ i₁ = v₂ i₂
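The ideal-transformer relations above can be checked with arbitrary illustrative numbers (not from the notes):

```python
# Ideal transformer: v1/v2 = N1/N2 and i1/i2 = N2/N1, so instantaneous
# power balances: v1*i1 == v2*i2.
N1, N2 = 200, 50
v1, i1 = 120.0, 0.5

v2 = v1 * N2 / N1   # secondary voltage
i2 = i1 * N1 / N2   # secondary current

assert abs(v1 * i1 - v2 * i2) < 1e-12  # power conserved
print(v2, i2)
```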
Topic 2 Notes
Jeremy Orloff
2 Analytic functions
2.1 Introduction
The main goal of this topic is to define and give some of the important properties of complex analytic
functions. A function f(z) is analytic if it has a complex derivative f′(z). In general, the rules for computing derivatives will be familiar to you from …

[Source: https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf]
…0. There are lots of ways to do this. For example, if we let Δz go to 0 along the x-axis then Δy = 0 while Δx goes to 0. In this case, we would have

f′(0) = lim_{Δx→0} Δx/Δx = 1.

On the other hand, if we let Δz go to 0 along the positive y-axis then

f′(0) = lim_{Δy→0} −iΔy/(iΔy) = −1.

The limits do not agree, so …
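The direction dependence is easy to see numerically. The sketch below (not in the original notes) takes f(z) = conj(z) and computes the difference quotient at 0 along the two axes:

```python
# f(z) = conj(z): the difference quotient at 0 depends on the direction of dz.
def quotient(dz):
    f = lambda z: z.conjugate()
    return (f(0 + dz) - f(0)) / dz

along_x = quotient(1e-8 + 0j)   # approach along the x-axis
along_y = quotient(1e-8j)       # approach along the y-axis
print(along_x, along_y)         # 1 along x, -1 along y
```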
…is not part of A.) In contrast, the set B is not an open region. Notice the point z shown is on the boundary, so every disk around z contains points outside B.

Left: an open region A; right: B is not an open region
2.4 Limits and continuous functions
Definition. If f(z) is defined on a punctured disk around z₀ then we say

lim_{z→z₀} f(z)…
• lim_{z→z₀} f(z)g(z) = w₁ · w₂.
• If w₂ ≠ 0 then lim_{z→z₀} f(z)/g(z) = w₁/w₂.
• If h(z) is continuous and defined on a neighborhood of w₁ then lim_{z→z₀} h(f(z)) = h(w₁).

(Note: we will give the official definition of continuity in the next section.)

We won't give a proof of these properti…
…an open region then the phrase "f is continuous on A" means f is continuous at every point z₀ of A: f(z₀) is defined and lim_{z→z₀} f(z) = f(z₀).
Example 2.5. (Some continuous functions)
(i) A polynomial

P(z) = a₀ + a₁z + a₂z² + … + aₙzⁿ

is continuous on the entire plane. Reason: it is clear that each power (x + iy)ᵏ is continuous as a function of (x, y).

(ii) The exponential function is continuous…
2.5 The point at infinity

By definition the extended complex plane is C ∪ {∞}. That is, we have one point at infinity to be thought of in a limiting sense described as follows.

A sequence of points {zₙ} goes to infinity if |zₙ| goes to infinity. This "point at infinity" is approached in any direction we go. All of…

Example 2.7. Show lim_{z→∞} zⁿ = ∞ (for n a positive integer).

Solution: We need to show that |zⁿ| gets large as |z| gets large. Write z = re^{iθ}; then

|zⁿ| = |z|ⁿ = rⁿ
2.5.2 Stereographic projection from the Riemann sphere
This is a lovely section and ... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
That is, the small cap around the north pole is a neighborhood of the point at infinity on the sphere!
The figure below shows another common version of stereographic projection. In this figure the
sphere sits with its south pole at the origin. We still project using secant lines from the north pole.
2.6 Derivatives
The definition of... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
depending on the angle at which Δz approaches 0.
2.6.1 Derivative rules
It wouldn't be much fun to compute every derivative using limits. Fortunately, we have the same differentiation formulas as for real-valued functions. That is, assuming f and g are differentiable, we have:

• Sum rule: (f(z) + g(z))′ = f′ + g′
• Product rule…
2.7.1 Partial derivatives as limits
Before getting to the Cauchy-Riemann equations we remind you about partial derivatives. If u(x, y) is a function of two variables then the partial derivatives of u are defined as

u_x(x, y) = lim_{Δx→0} [u(x + Δx, y) − u(x, y)] / Δx ,

i.e. the derivative of u holding y constant.

u_y(x, y) = lim_{Δy→0} [u(x, y + Δy) − u(…
Horizontal direction: Δy = 0, Δz = Δx.

f′(z) = lim_{Δx→0} [f(z + Δx) − f(z)] / Δx
 = lim_{Δx→0} [f(x + Δx + iy) − f(x + iy)] / Δx
 = lim_{Δx→0} [(u(x + Δx, y) + iv(x + Δx, y)) − (u(x, y) + iv(x, y))] / Δx
 = lim_{Δx→0} [u(x + Δx, y) − u(x, y)] / Δx + i [v(x + Δx, y) − v(x, y)] / Δx
 = u_x(x, y) + i v_x(x, y)

Vertical direction: Δx = 0, Δz = iΔy. (We'll do this one a little faster.)

f′(z) = lim_{Δy→0} [f(z + iΔy) − f(z)] / (iΔy)
 = lim_{Δy→0} [(u(x, y + Δy) + iv(x, y + Δy)) − (u(x, y) + …
…checking that a function is differentiable and computing its derivative.

Example 2.11. Use the Cauchy-Riemann equations to show that e^z is differentiable and its derivative is e^z.

Solution: We write e^z = e^{x+iy} = e^x cos(y) + i e^x sin(y). So u(x, y) = e^x cos(y) and v(x, y) = e^x sin(y).
…number a + bi as a 2 × 2 matrix

a + bi ↔ [ a  −b ]
         [ b   a ] .   (1)

Now if we write f(z) in terms of (x, y) we have

f(z) = f(x + iy) = u(x, y) + iv(x, y) ↔ f(x, y) = (u(x, y), v(x, y)).

We have f′(z) = u_x + i v_x, so we can represent f′(z) as

[ u_x  −v_x ]
[ v_x   u_x ] .

Using the Cauchy-Riemann equations we can replace −v_x by u_y and u_x by v_y, which gives us the representation

f′(z) ↔ [ u_x  u_y ]
        [ v_x  v_y ] , …
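The matrix representation (1) really does multiply the way complex numbers do; the check below is an illustration, not part of the notes:

```python
# [[a, -b], [b, a]] represents a + bi; matrix product should match complex product
def to_matrix(z):
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 2 + 3j, -1 + 4j
product_matrix = matmul(to_matrix(z), to_matrix(w))
assert product_matrix == to_matrix(z * w)
```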
To show u_x = v_y, we use Equation 2 to see that the diagonal entries are u_x and v_y. Since a matrix of the form (1) has equal diagonal entries, we have u_x = v_y. Similarly, to show u_y = −v_x, we compare the off-diagonal entries, which in the form (1) are −b and b. So, u_y = −v_x. QED.
Technical point. We’ve assumed as many partials as we need. So far we can’t guarantee that all the
partials exist. Soon we will have a theorem which says that an analytic function has de... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
…all of C (f is entire). f′(z) = 0.

3. f(z) = zⁿ (n an integer ≥ 0)
Domain = all of C (f is entire).
f′(z) = n z^{n−1}.

4. P(z) (polynomial)
A polynomial has the form P(z) = aₙzⁿ + a_{n−1}z^{n−1} + … + a₀.
Domain = all of C (P(z) is entire).
P′(z) = n aₙ z^{n−1} + (n − 1) a_{n−1} z^{n−2} + … + 2 a₂ z + a₁.

5. f(z) = 1/z
Domain = C − {0} (the punctured plane).
f′(z) = −1/z².

6. …
…z = nπ for any integer n.
The zeros of cos(z) are z = π/2 + nπ for any integer n.
(That is, they have only the real zeros that you learned about in your trig class.)

8. Other trig functions cot(z), sec(z), etc.
Definition. The same as for the real versions of these functions, e.g. cot(z) = cos(z)/sin(z), sec(z) = 1/cos(z).
Domain: The entire ... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
…n an integer, then zⁿ is defined and analytic on C − {0}, with (zⁿ)′ = n z^{n−1}.

12. sin⁻¹(z)

Definition. sin⁻¹(z) = −i log(iz + √(1 − z²)).

The definition is chosen so that sin(sin⁻¹(z)) = z. The derivation of the formula is as follows. Let w = sin⁻¹(z), so z = sin(w). Then,

z = (e^{iw} − e^{−iw}) / (2i)  ⇒  e^{2iw} − 2iz e^{iw} − 1 = 0

Solving the quadratic in e^{iw} gives

e^{iw} = (2iz + √(−4z² + 4)) / 2 = …
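The defining property sin(sin⁻¹(z)) = z can be verified numerically from the log formula; this sketch (not in the notes) uses the principal branches from `cmath`:

```python
import cmath

# sin^{-1}(z) = -i log(iz + sqrt(1 - z^2)); check sin(arcsin(z)) == z
def arcsin(z):
    return -1j * cmath.log(1j * z + cmath.sqrt(1 - z * z))

for z in [0.3, 0.5 + 0.2j, -1.7j]:
    assert abs(cmath.sin(arcsin(z)) - z) < 1e-9
```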
2.9.2 A few proofs
Here we prove at least some of the facts stated in the list just above.
1. f(z) = e^z. This was done in Example 2.11 using the Cauchy-Riemann equations.
2. f(z) ≡ c (constant). This case is trivial.
3. f(z) = zⁿ (n an integer ≥ 0): show f′(z) = n z^{n−1}.
It's probably easiest to use the definition of derivative directl…
…ition in terms of exponentials.

10. log(z). The derivative of log(z) can be found by differentiating the relation e^{log(z)} = z using the chain rule. Let w = log(z), so e^w = z and

d/dz (e^w) = (dw/dz) e^w = 1  ⇒  dw/dz = 1/e^w

Using w = log(z) we get

d log(z)/dz = 1/z.

11. z^a (any complex a). The derivative for this follows from the formula…
…f′(z) = −e^z/(e^z − 1)². A little more formally: h(z) = g(f(z)), where g(z) = 1/z and w = f(z) = e^z − 1. We know that f(z) is entire and g(z) is analytic everywhere except z = 0. Therefore, g(f(z)) is analytic everywhere except where f(z) = 0.
Example 2.17. It can happen that the derivative has a larger domain where it is analytic than the original function. …
…0}.

So, f(z) is analytic except where 1 + e^z is real and ≤ 0. That is, except where e^z is real and ≤ −1. Now, e^z = e^x e^{iy} is real only when y is a multiple of π. It is negative only when y is an odd multiple of π. It has magnitude greater than 1 only when x > 0. Therefore f(z) is analytic on the region

C − {x ≥ 0, y = an odd multiple of π}
…the curve shown heading towards z. The bigger circle of radius 2 captures the sequence by the time n = 47; the smaller circle doesn't capture it till n = 59. Note that z₂₅ is inside the larger circle, but since later points are outside the circle we don't say the sequence is captured at n = 25.

Definition. The sequence z₁, z₂, …
…|zₙ − z| ≤ |xₙ − x| + |yₙ − y| < ε/2 + ε/2 = ε.

So all we have to do is pick n large enough that both |xₙ − x| < ε/2 and |yₙ − y| < ε/2. Since this can clearly be done we have proved that zₙ → z.
This was clearly more work than we want to do for every limit. Fortunately, most of the time we can
apply general rules to determine a limi... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
…(within A) to z₀. Since …

Remarks.
1. Using the punctured disk (also called a deleted neighborhood) means that f(z) does not have to be defined at z₀, and if it is, f(z₀) does not necessarily equal w₀. If f(z₀) = w₀ then we say f is continuous at z₀.
2. Ask any mathematician to complete the phrase "For every ε" and the odds…
18.405J/6.841J: Advanced Complexity Theory
Spring 2016
Lecture 12: Randomized Communication…

…O(log n + log 1/δ).

In other words, we can make randomness private at a small, additive logarithmic cost and a small error (which can be boosted to only ε for a small constant factor).

Proof. Consider a public-randomness protocol P such that f(x, y) = P(x, y, r) with high probability, and let π denote the set of all st…

[Source: https://ocw.mit.edu/courses/18-405j-advanced-complexity-theory-spring-2016/3311a00f4875153695469046bc4615c1_MIT18_405JS16_Random.pdf]
Thus, if t ≥ ((n + 1)/δ²) ln 2, by a union bound,

P[ ∃(x, y) : (1/t) Σ_{i=1}^{t} Z(x, y, rᵢ) > ε + δ ] ≤ 2^{2n} (2 e^{−2δ²t}) < 1,

so there must exist some choice of r₁, …, r_t such that, for all (x, y), P′ fails with probability at most ε + δ.

Now, π′ has only t = O(n/δ²) = p…
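The arithmetic behind the union bound can be checked directly; the sketch below (an illustration, not from the notes) confirms that this choice of t drives 2^{2n} · 2e^{−2δ²t} below 1:

```python
import math

# With t = ceil((n+1)/delta^2 * ln 2), the union-bound quantity
# 2^(2n) * 2 * exp(-2 delta^2 t) drops below 1.
def bound(n, delta):
    t = math.ceil((n + 1) / delta ** 2 * math.log(2))
    return 2 ** (2 * n) * 2 * math.exp(-2 * delta ** 2 * t)

print(bound(20, 0.1), bound(5, 0.2))  # both strictly less than 1
```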
P_r[ P(x, y, r) ≠ f(x, y) ] ≤ ε

⇒ E_{(x,y)∼μ} P_r[ P(x, y, r) ≠ f(x, y) ] ≤ ε,

so we can "fix our randomness" to form a deterministic algorithm.
Theorem 4. In fact, this is an equality: R^{pub}_ε(f) = max_μ D^μ_ε(f).
Proof idea. This is solved by the minimax theorem or LP duality.
In analyzing deterministic communic... | https://ocw.mit.edu/courses/18-405j-advanced-complexity-theory-spring-2016/3311a00f4875153695469046bc4615c1_MIT18_405JS16_Random.pdf |
…P_{(x,y)∼μ}[ f(x, y) ≠ P(x, y) ∧ (x, y) ∈ R_l ],

so there exists some R = R_l so that

Disc_μ(f) ≥ Disc_μ(f, R) ≥ (1 − 2ε)/m  ⟹  D^μ_ε(f) ≥ log m ≥ log[ (1 − 2ε) / Disc_μ(f) ].
Thus, we have a way to bound randomized communication complexity with discrepancy through
distributional complexity. We are now ready to prove the main theorem.
Theor... | https://ocw.mit.edu/courses/18-405j-advanced-complexity-theory-spring-2016/3311a00f4875153695469046bc4615c1_MIT18_405JS16_Random.pdf |
‖H‖ denotes the largest eigenvalue of H. (For the 2 × 2 building block [1 1; 1 −1] this is √2, and for the 2ⁿ × 2ⁿ matrix H we get ‖H‖ = 2^{n/2}.)

Thus,

Disc_μ(f, R = S × T) = (1/2^{2n}) | Σ_{(x,y)} H(x, y) 1_S(x) 1_T(y) |
 = (1/2^{2n}) | (1_S)ᵀ H (1_T) |
 ≤ (1/2^{2n}) ‖H‖ √|S| √|T|
 ≤ (2^{n/2} · 2^{n/2} · 2^{n/2}) / 2^{2n} = 2^{−n/2},

where 1_S and 1_T are the indicator vectors of S and …
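The norm fact used above can be verified for small n. The sketch below (an illustration, with my own helper names) builds the ±1 matrix by the Sylvester construction and checks HᵀH = 2ⁿ I, which gives ‖H‖ = 2^{n/2}:

```python
# Sylvester construction: H_{k+1} = [[H_k, H_k], [H_k, -H_k]], starting from [[1]].
def sylvester(n):
    H = [[1]]
    for _ in range(n):
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def gram(H):
    # (H^T H)_{ij}: inner products of columns i and j
    size = len(H)
    return [[sum(H[k][i] * H[k][j] for k in range(size)) for j in range(size)]
            for i in range(size)]

n = 3
H = sylvester(n)
G = gram(H)  # should be 2^n times the identity, hence ||H|| = 2^(n/2)
assert all(G[i][j] == (2 ** n if i == j else 0)
           for i in range(2 ** n) for j in range(2 ** n))
```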
6.096 Introduction to C++
Massachusetts Institute of Technology
January 10, 2011
John Marrero
Lecture 4 Notes: Arrays and Strings
1 Arrays
So far we have used variables to store values in memory for later reuse. We now explore a
means to store multiple values together as one unit, the array.
An array is a fixed ... | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/33183276121549190ef4f8017b06b1b6_MIT6_096IAP11_lec04.pdf |
array can also be initialized with values that are not known beforehand:
#include <iostream>
using namespace std;

int main() {
    int arr[4];
    cout << "Please enter 4 integers:" << endl;
    for(int i = 0; i < 4; i++)
        cin >> arr[i];

    cout << "Values in array a…
}
reference and so any changes made to the
array within the function will be observed in the calling scope.
C++ also supports the creation of multidimensional arrays, through the addition of more than
one set of brackets. Thus, a two-dimensional array may be created by the following:
type arrayName[dimension1][dimens... | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/33183276121549190ef4f8017b06b1b6_MIT6_096IAP11_lec04.pdf |
provided when initializing multidimensional arrays, as it
is otherwise impossible for the compiler to determine what the intended element partitioning
is. For the same reason, when multidimensional arrays are specified as arguments to
functions, all dimensions but the first must be provided (the first dimension is o... | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/33183276121549190ef4f8017b06b1b6_MIT6_096IAP11_lec04.pdf |
Here is an example to illustrate the cctype library:
#include <iostream>
#include <cctype>
using namespace std;

int main() {
    char messyString[] = "t6H0I9s6.iS.999a9.STRING";
    char current = messyString[0];
    for(int i = 0; current != '\0'; …

    cout << endl;
    return 0;
}
    strcpy(fragment3, fragment1);
    strcat(finalString, fragment3);
    strcat(finalString, fragment2);
    cout << finalString;
    return 0;
}
This example creates and initializes two strings, fragment1 and fragment2. fragment3 is
declared but not initialized. finalString is partially initialized (with ... | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/33183276121549190ef4f8017b06b1b6_MIT6_096IAP11_lec04.pdf |
MIT OpenCourseWare
http://ocw.mit.edu
6.006 Introduction to Algorithms
Spring 2008
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Lecture 5: Hashing I: Chaining, Hash Functions
Lecture Overview
• D... | https://ocw.mit.edu/courses/6-006-introduction-to-algorithms-spring-2008/3319dd2c718ac545917c4212920b1b2e_lec5.pdf |
def count_frequency(word_list):
    D = {}
    for word in word_list:
        if word in D:
            D[word] += 1
        else:
            D[word] = 1

• new docdist7 uses dictionaries instead of sorting:

def inner_product(D1, D2):
    sum = 0.0
    for key in D1:
        if key in D2:
            sum += D1[key]*D2[key]

⇒ optimal Θ(n) document distance assuming dictionary ops take O(1) t…
…) = 64 = hash('\0\0C')

• Object's key should not change while in table (else cannot find it anymore)
• No mutable objects like lists

(Figure: table slots 0, 1, 2, … each chaining key/item pairs.)
Solution 2 : hashing (verb from ‘hache’ = hatchet, Germanic)
• Reduce universe U of a... | https://ocw.mit.edu/courses/6-006-introduction-to-algorithms-spring-2008/3319dd2c718ac545917c4212920b1b2e_lec5.pdf |
…simple uniform hashing

The performance is likely to be O(1 + α): the 1 comes from applying the hash function and accessing the slot, whereas the α comes from searching the list. It is actually Θ(1 + α), even for successful search (see CLRS).

Therefore, the performance is O(1) if α = O(1), i.e. m = Ω(n).
…between 2^{w−1} and 2^w.

Good Practice: a not too close to 2^{w−1} or 2^w.

Key Lesson: Multiplication and bit extraction are faster than division.

Figure 4: Multiplication Method
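A minimal sketch of the multiplication method in code (an illustration; the particular multiplier below is an arbitrary odd constant chosen for the demo, not a value prescribed by the notes):

```python
# Multiplication method: h(k) = ((a*k) mod 2^w) >> (w - r),
# i.e. take the top r bits of the low w-bit word of a*k.
w, r = 32, 8               # word size, table of size 2^r
a = 2654435769             # odd multiplier, not too close to 2^(w-1) or 2^w

def mult_hash(k):
    return ((a * k) % (1 << w)) >> (w - r)

# Consecutive keys spread out over the 256 slots
slots = {mult_hash(k) for k in range(1000)}
print(len(slots))
```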
6.776
High Speed Communication Circuits
Lecture 1
Communication Systems Overview
Profs. Hae-Seung Lee and Michael H. Perrott
Massachusetts Institute of Technology
February 1, 2005
Copyright © 2005 by H.-S. Lee and M. H. Perrott
Modulation Techniques
• Amplitude Modulation (AM)
  - Standard AM
  - Double-sideband (D…

[Source: https://ocw.mit.edu/courses/6-776-high-speed-communication-circuits-spring-2005/33315df81345b772b4100897ea80454d_lec1.pdf]
…of modulated sine wave no longer corresponds directly to the baseband signal
  - Envelope instead follows the absolute value of the baseband waveform; a negative value of the baseband input produces a 180° phase shift in the carrier
  - Envelope detector can no longer be used for receiver
• The carrier frequency tone th…
• Most baseband signals have no DC or very low frequency components
• One of the sidebands can be removed at the IF or RF stage (much easier to filter in the IF stage)
H.-S. Lee & M.H. Perrott
MIT OCW
SSB Spectra
• One of the sidebands is removed by sideband filter or phase shift techniques
  - Sign…
correct problems
Analog Modulation
• I/Q signals take on a continuous range of values (as viewed in the time domain)
• Used for AM/FM radios, television (non-HDTV), and the first cell phones
• Newer systems typically employ digital modulation instead
…errors that occur if the wrong decision is made due to noise
Impact of Noise on Constellation Diagram
• Sampled data values no longer land in the exact same location across all sample instants
• Decision boundaries remain fixed
• Significant noise causes bit errors to…
Transitioning Between Constellation Points
(Figure: 8-point constellation with symbols 000–111 arranged on a circle in the I/Q plane, with decision boundaries between them.)
• Constant-envelope requirement forces transitions to occur along the circle that the constellation points sit on
  - I/Q filtering cannot be done independently!
  - Signifi…
bandwidth
Time-Division Duplexing (Half-duplex)
(Figure: transmitter and receiver share one antenna through a switch; a switch-control signal selects TX or RX.)
• Use any desired frequency channel for transmitter and receiver
• Send transmit and receive signals at different times
• Allows communication directly betw…
“chips”
• Individual pulses represent binary data values
Receiver Selects Desired Transmitter Through Its Code
(Figure: separate transmitters x1(t), x2(t), each multiplied by its code PN1(t), PN2(t); the transmit signals y1(t), y2(t) combine in freespace into r(t); the receiver for desired channel 1 multiplies by PN1(t) and lowpass filters to recover its signal.)
• …
• CDMA transmitters broaden data spectra by encoding them onto chip sequences ('spread-spectrum')
• CDMA receiver correlates with desired transmitter code
  - Spectrum of desired channel reverts to its original width
  - Spectrum of undesired channel remains broad
• Can be "mostly" filt…
• No narrowband filters (RF or baseband) needed in transceivers
• Extremely tight time-synchronization is essential
Pictures courtesy of R. Blazquez et al.
OFDM UWB
• Can take advantage of wealth of knowledge in narrowband communication
• Involves the usual …
Voting classifiers, training error of boosting
Support vector machines (SVM)
Generalization error of SVM
One dimensional concentration inequalities.
Bennett's inequality
Bernstein's inequality
Hoeffding, Hoeffding-Chernoff, and Khinchine inequalities
Vapnik-Chervonenkis …

[Source: https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf]
…classifiers
Bounds on the generalization error of voting classifiers
Bounds on the generalization error of voting classifiers
Bounds in terms of sparsity
Bounds in terms of sparsity (example)
Martingale-difference inequalities
Comparison inequality for Rademacher pro…
Applications of Talagrand's concentration inequality
Applications of Talagrand's convex-hull distance inequality. Bin packing
Entropy tensorization inequality. Tensorization of Laplace transform
Application of the entropy tensorization technique
Stein's method for concentration in…
AdaBoost
Assign weights to training examples: w₁(i) = 1/n.
for t = 1..T
  1) find a "good" classifier hₜ ∈ H; error εₜ = Σ_{i=1}^{n} wₜ(i) I(hₜ(Xᵢ) ≠ Yᵢ)
  2) update the weight for each i:

     wₜ₊₁(i) = wₜ(i) e^{−αₜ Yᵢ hₜ(Xᵢ)} / Zₜ ,  where

     Zₜ = Σ_{i=1}^{n} wₜ(i) e^{−αₜ Yᵢ hₜ(Xᵢ)}  and  αₜ = (1/2) ln[(1 − εₜ)/εₜ] > 0 …
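The loop above can be sketched in code. This is a minimal toy implementation with decision stumps as the weak-learner family H; the stump parameterization and the data are my own illustration, not from the notes:

```python
import math

# Toy AdaBoost on 1-d data with +/-1 labels; weak learners are stumps
# h(x) = pol if x > theta else -pol.
def adaboost(xs, ys, T):
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha_t, theta, polarity)
    for _ in range(T):
        best = None
        for theta in xs:                      # candidate thresholds
            for pol in (1, -1):
                h = [pol if x > theta else -pol for x in xs]
                # weighted error eps_t = sum of w_t(i) over misclassified i
                eps = sum(wi for wi, hi, yi in zip(w, h, ys) if hi != yi)
                if best is None or eps < best[0]:
                    best = (eps, theta, pol, h)
        eps, theta, pol, h = best
        eps = min(max(eps, 1e-12), 1 - 1e-12)  # clamp to keep log finite
        alpha = 0.5 * math.log((1 - eps) / eps)
        ensemble.append((alpha, theta, pol))
        # reweight: w_{t+1}(i) = w_t(i) * exp(-alpha * y_i * h_t(x_i)) / Z_t
        w = [wi * math.exp(-alpha * yi * hi) for wi, yi, hi in zip(w, ys, h)]
        Z = sum(w)
        w = [wi / Z for wi in w]
    def f(x):
        s = sum(a * (p if x > t else -p) for a, t, p in ensemble)
        return 1 if s > 0 else -1
    return f

xs = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
f = adaboost(xs, ys, T=5)
assert all(f(x) == y for x, y in zip(xs, ys))
```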
Unrolling the update,

w_{T+1}(i) = w_T(i) e^{−α_T Y_i h_T(X_i)} / Z_T = ⋯ = (1/n) e^{−Y_i Σ_{t=1}^{T} α_t h_t(X_i)} / Π_{t=1}^{T} Z_t

Hence,

w_{T+1}(i) Π_{t=1}^{T} Z_t = (1/n) e^{−Y_i Σ_{t=1}^{T} α_t h_t(X_i)},

and therefore

(1/n) Σ_{i=1}^{n} I(f(X_i) ≠ Y_i) ≤ (1/n) Σ_{i=1}^{n} e^{−Y_i Σ_{t=1}^{T} α_t h_t(X_i)} = Σ_{i=1}^{n} w_{T+1}(i) Π_{t=1}^{T} Z_t = Π_{t=1}^{T} Z_t .

Also,

Z_t = Σ_{i=1}^{n} w_t(i) e^{−α_t Y_i h_t(X_i)} = Σ_{i=1}^{n} w_t(i) e^{−α_t} I(h_t(X_i)…
…= 2 (εₜ(1 − εₜ))^{1/2} = 2 [(1/2 − γₜ)(1/2 + γₜ)]^{1/2} = (1 − 4γₜ²)^{1/2}
Lecture 03
Support vector machines (SVM).
18.465
As in the previous lecture, consider the classification setting. Let X = Rd , Y = {+1, −1}, and
H = {ψ · x + b : ψ ∈ Rᵈ, b ∈ R}, where |ψ| = 1.
We would like to maximize over the choice of hyperp... | https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf |
Setting the derivatives of the Lagrangian to zero gives

ψ = Σᵢ αᵢ yᵢ xᵢ  and  Σᵢ αᵢ yᵢ = 0.
Substituting these into φ,
φ = (1/2) |Σᵢ αᵢyᵢxᵢ|² − Σ_{i=1}^{n} αᵢ [ yᵢ ( Σ_{j=1}^{n} αⱼyⱼ xⱼ·xᵢ + b ) − 1 ]
  = (1/2) Σ_{i,j} αᵢαⱼ yᵢyⱼ xᵢ·xⱼ − Σ_{i,j} αᵢαⱼ yᵢyⱼ xᵢ·xⱼ − b Σᵢ αᵢyᵢ + Σᵢ αᵢ
  = Σᵢ αᵢ − (1/2) Σ_{i,j} αᵢαⱼ yᵢyⱼ xᵢ·xⱼ …
…K(xᵢ, xⱼ) = Σ_{k=1}^{∞} φₖ(xᵢ) φₖ(xⱼ), a symmetric positive definite kernel.

Examples:
(1) Polynomial: K(x₁, x₂) = (x₁ · x₂ + 1)^ℓ , ℓ ≥ 1.
(2) Radial basis: K(x₁, x₂) = e^{−γ|x₁ − x₂|²}.
(3) Neural (two-layer): K(x₁, x₂) = 1/(1 + e^{α x₁·x₂ + β}) for some α, β (for some it's not positive definite).

Once the αᵢ are known, the decision func…
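For the polynomial kernel the Mercer expansion can be written down explicitly in low degree. The sketch below (an illustration with my own feature-map naming) checks that K(x, y) = (x·y + 1)² equals an ordinary dot product of finite feature vectors in 2-d:

```python
import math

# Degree-2 polynomial kernel in 2-d and its explicit feature map
# phi(x) = (x1^2, x2^2, sqrt(2) x1 x2, sqrt(2) x1, sqrt(2) x2, 1)
def K(x, y):
    return (x[0] * y[0] + x[1] * y[1] + 1) ** 2

def phi(x):
    s = math.sqrt(2)
    return [x[0] ** 2, x[1] ** 2, s * x[0] * x[1], s * x[0], s * x[1], 1.0]

x, y = (1.0, 2.0), (-0.5, 3.0)
lhs = K(x, y)
rhs = sum(a * b for a, b in zip(phi(x), phi(y)))
assert abs(lhs - rhs) < 1e-9
```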
…is the same as training on z₁, …, zᵢ₋₁, zᵢ₊₁, …, z_{n+1} and evaluating on zᵢ. Hence, for any i,

A.G.E. = E E_{zᵢ} I( f_{z₁,…,zᵢ₋₁,zᵢ₊₁,…,z_{n+1}}(xᵢ) ≠ yᵢ )

and

A.G.E. = E [ (1/(n+1)) Σ_{i=1}^{n+1} I( f_{z₁,…,zᵢ₋₁,zᵢ₊₁,…,z_{n+1}}(xᵢ) ≠ yᵢ ) ] = E [ leave-one-out error ].

Therefore, to obtain a bound on the …
…as follows.

Proof. Clearly,

L.O.O.E. ≤ (# support vectors) / (n + 1).

Indeed, if xᵢ is not a support vector, then removing it does not affect the solution. Using Lemma 4.1 above,

Σ_{i ∈ supp. vect.} I(xᵢ is misclassified) ≤ Σ_{i ∈ supp. vect.} αᵢ⁰ D² = D² Σ αᵢ⁰ .

In the last step we use the fact that Σ αᵢ⁰ = 1/m². Indeed, since |…
…problem of training a support vector classifier is argmin_ψ (1/2)|ψ|², and the dual problem is argmax_α [ Σᵢ αᵢ − (1/2)|Σᵢ αᵢyᵢxᵢ|² ]. When the Kuhn-Tucker conditions can be satisfied, min_ψ (1/2) ψ·ψ = max_α [ Σᵢ αᵢ − (1/2)|Σᵢ αᵢyᵢxᵢ|² ] = 1/(2m²), where m is the margin of an optimal hyperplane.

Proof. Define w(α) = Σᵢ αᵢ − (1/2)|Σᵢ αᵢyᵢxᵢ|². Let α* = argmax_α w(α) subject to αₚ = 0, αᵢ ≥ 0 for i ≠ p, and …
…problem, α* maximizes w(α) with the constraint that αₚ = 0; thus w(α*) is no less than w(α⁰ − α⁰ₚ γ), which is a special case that satisfies the constraints, including αₚ = 0. α⁰ maximizes w(α) with the constraint αₚ ≥ 0, which relaxes the constraint αₚ = 0; thus w(α*) ≤ w(α⁰). For the primal problem, the training prob…
$$w(\alpha^\star + t\gamma) = \sum_i \alpha_i^\star + t - \frac{1}{2}\Big\|\sum_i \alpha_i^\star y_i x_i\Big\|^2 - t\,\Big(\underbrace{\sum_i \alpha_i^\star y_i x_i}_{\psi^\star}\Big)\cdot(y_p x_p) - \frac{t^2}{2}\,\|y_p x_p\|^2$$
and
$$w(\alpha^\star + t\gamma) - w(\alpha^\star) = t\,\big(1 - y_p\,\psi^\star\cdot x_p\big) - \frac{t^2}{2}\,\|x_p\|^2.$$
Maximizing the expression over $t$, we find $t = (1 - y_p\cdot\psi^\star\cdot x_p)/\|x_p\|^2$, and
$$\max_t\,\big[w(\alpha^\star + t\gamma) - w(\alpha^\star)\big] ...
vector, and $y_p\cdot\psi^0\cdot x_p = 1$. Thus
$$w(\alpha^0) - w(\alpha^0 - \alpha_p^0\gamma) = \frac{1}{2}\,(\alpha_p^0)^2\,\|x_p\|^2 \qquad\text{and}\qquad \frac{1}{2}\,\frac{(1 - y_p\cdot\psi^\star\cdot x_p)^2}{\|x_p\|^2} \le \frac{1}{2}\,(\alpha_p^0)^2\,\|x_p\|^2.$$
Thus
$$\alpha_p^0 \ge \frac{|1 - y_p\cdot\psi^\star\cdot x_p|}{\|x_p\|^2} \ge \frac{1}{D^2}.$$
The last step above is due to the fact that the support vector classifier associated with $\psi^\star$ misclassifies $(x_p, ...
Let $Z, Z_1, \dots, Z_n \in \mathbb{R}$ be i.i.d. random variables. We're interested in bounds on $\frac{1}{n}\sum_{i=1}^n Z_i - \mathbb{E}Z$.

(1) Jensen's inequality: If $\phi$ is a convex function, then $\phi(\mathbb{E}Z) \le \mathbb{E}\phi(Z)$.

(2) Chebyshev's inequality: If $Z \ge 0$, then $P(Z \ge t) \le \frac{\mathbb{E}Z}{t}$.

Proof:
$$\mathbb{E}Z = \mathbb{E}ZI(Z < t) + \mathbb{E}ZI(Z \ge t) \ge \mathbb{E}ZI(Z \ge t) \ge \mathbb{E}\,tI(Z \ge t) = t\,P(Z \ge t).$$
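The tail bound $P(Z \ge t) \le \mathbb{E}Z/t$ can be checked exactly on a small discrete example (the distribution is my own illustration, not from the notes):

```python
values = [0, 1, 2, 3]           # Z uniform on {0, 1, 2, 3}
ez = sum(values) / len(values)  # EZ = 1.5
# verify P(Z >= t) <= EZ / t at every positive threshold
ok = all(
    sum(1 for z in values if z >= t) / len(values) <= ez / t
    for t in (1, 2, 3)
)
```

For instance $P(Z \ge 2) = 1/2 \le 1.5/2$, with equality never attained here.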
(3) Mar...
Lecture 05
One dimensional concentration inequalities. Bennett's inequality.
18.465

Expanding,
$$\mathbb{E}e^{\lambda Z} = \mathbb{E}\sum_{k=0}^{\infty}\frac{(\lambda Z)^k}{k!} = \sum_{k=0}^{\infty}\frac{\lambda^k\,\mathbb{E}Z^k}{k!} = 1 + \sum_{k=2}^{\infty}\frac{\lambda^k\,\mathbb{E}Z^2 Z^{k-2}}{k!} \le 1 + \sum_{k=2}^{\infty}\frac{\lambda^k M^{k-2}\sigma^2}{k!}$$
(the $k = 1$ term vanishes since $\mathbb{E}Z = 0$), and
$$1 + \frac{\sigma^2}{M^2}\sum_{k=2}^{\infty}\frac{\lambda^k M^k}{k!} = 1 + \frac{\sigma^2}{M^2}\big(e^{\lambda M} - 1 - \lambda M\big) \le \exp\Big(\frac{\sigma^2}{M^2}\big(e^{\lambda M} - 1 - \lambda M\big)\Big).$$
Hence,
$$P\Big(\sum_{i=1}^n Z_i \ge t\Big) \le \exp\Big(-\lambda t + \frac{n\sigma^2}{M^2}\big(e^{\lambda M} - 1 - \lambda M\big)\Big).$$
Minimizing over $\lambda > 0$ gives $\lambda = \frac{1}{M}\log\big(1 + \frac{tM}{n\sigma^2}\big)$, so
$$P\Big(\sum_{i=1}^n Z_i \ge t\Big) \le \exp\bigg(-\frac{t}{M}\log\Big(1 + \frac{tM}{n\sigma^2}\Big) + \frac{n\sigma^2}{M^2}\Big(\frac{tM}{n\sigma^2} - \log\Big(1 + \frac{tM}{n\sigma^2}\Big)\Big)\bigg)$$
$$= \exp\bigg(-\frac{n\sigma^2}{M^2}\Big(\Big(1 + \frac{tM}{n\sigma^2}\Big)\log\Big(1 + \frac{tM}{n\sigma^2}\Big) - \frac{tM}{n\sigma^2}\Big)\bigg) = \exp\bigg(-\frac{n\sigma^2}{M^2}\,\varphi\Big(\frac{tM}{n\sigma^2}\Big)\bigg),$$
where $\varphi(x) = (1+x)\log(1+x) - x$.
Note that $\varphi(x) \sim x\log x$. We can weaken the bound by decreasing $\varphi(x)$. Take¹ $\varphi(x) = \frac{x^2}{2 + \frac{2}{3}x}$ to obtain Bernstein's inequality:
$$P\Big(\sum_{i=1}^n X_i \ge t\Big) \le \exp\Bigg(-\frac{n\sigma^2}{M^2}\cdot\frac{\big(\frac{tM}{n\sigma^2}\big)^2}{2 + \frac{2}{3}\cdot\frac{tM}{n\sigma^2}}\Bigg) = \exp\Big(-\frac{t^2}{2n\sigma^2 + \frac{2}{3}tM}\Big) = e^{-u},$$
where $u = \frac{t^2}{2n\sigma^2 + \frac{2}{3}tM}$. Solve for $t$:
$$t^2 - \frac{2}{3}uM\,t - 2n\sigma^2 u = 0,$$
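The substitution is legitimate because the Bernstein function really does sit below Bennett's $\varphi$ everywhere on $x > 0$; a quick numerical spot-check (the sample points are my own choice):

```python
import math

def phi(x):
    # Bennett's function: (1 + x) log(1 + x) - x
    return (1 + x) * math.log(1 + x) - x

def phi_weak(x):
    # the weaker function x^2 / (2 + 2x/3) used for Bernstein's inequality
    return x * x / (2 + 2 * x / 3)

# phi(x) >= phi_weak(x) on a spread of scales
gaps = [phi(x) - phi_weak(x) for x in (0.01, 0.1, 1.0, 3.0, 10.0, 100.0)]
```

Both functions behave like $x^2/2$ near $0$, but for large $x$ the weaker one grows only linearly while $\varphi$ grows like $x\log x$, which is exactly the slack Bernstein gives up.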
$$t = \frac{uM}{3} + \sqrt{\frac{u^2M^2}{9} + 2n\sigma^2 u} \le \sqrt{2n\sigma^2 u} + \frac{4uM}{3}.$$
Hence, with probability at least $1 - e^{-u}$,
$$\frac{1}{n}\sum_{i=1}^n X_i - \mathbb{E}X \le \sqrt{\frac{2\sigma^2 u}{n}} + \frac{4uM}{3n} \qquad\text{and}\qquad \mathbb{E}X - \frac{1}{n}\sum_{i=1}^n X_i \le \sqrt{\frac{2\sigma^2 u}{n}} + \frac{4uM}{3n}.$$

¹exercise: show that this is the best approximation
Lecture 06
Bernstein's inequality.
18.465

Whenever $\sqrt{\frac{2\sigma^2 u}{n}} \ge \frac{4uM}{3n}$, we have $u \le \frac{9n\sigma^2}{8M^2}$. So,
$$\Big|\frac{1}{n}\sum X_i - \mathbb{E}X\Big| \lesssim \sqrt{\frac{2\sigma^2 u}{n}} \quad\text{for } u \lesssim n\sigma^2 \text{ (range of ...}
$\mathbb{E}I = P(f(X_i) \neq Y_i) = \mathbb{E}I^2$ and therefore $\operatorname{Var}(I) = \sigma^2 = \mathbb{E}I^2 - (\mathbb{E}I)^2$. Thus,
$$P(f(X_i) \neq Y_i) \le \frac{1}{n}\sum_{i=1}^n I(f(X_i) \neq Y_i) + \sqrt{\frac{2P(f(X_i) \neq Y_i)\,u}{n}} + \frac{2u}{3n}$$
with probability at least $1 - e^{-u}$. When the training error is zero,
$$P(f(X_i) \neq Y_i) \le \sqrt{\frac{2P(f(X_i) \neq Y_i)\,u}{n}} + \frac{2u}{3n}.$$
If we forget about $2u/3n$ for...
$$P\Big(\sum_{i=1}^n \varepsilon_i a_i \ge t\Big) \le e^{-\lambda t}\,\mathbb{E}\exp\Big(\lambda\sum_{i=1}^n \varepsilon_i a_i\Big) = e^{-\lambda t}\prod_{i=1}^n \mathbb{E}\exp(\lambda\varepsilon_i a_i).$$
Using the inequality $\frac{e^x + e^{-x}}{2} \le e^{x^2/2}$ (from Taylor expansion), we get
$$\mathbb{E}\exp(\lambda\varepsilon_i a_i) = \frac{1}{2}e^{\lambda a_i} + \frac{1}{2}e^{-\lambda a_i} \le e^{\frac{\lambda^2 a_i^2}{2}}.$$
Hence, we need to minimize the bound with respect to $\lambda > 0$:
$$P\Big(\sum_{i=1}^n \varepsilon_i a_i \ge t\Big) \le e^{-\lambda t + \frac{\lambda^2}{2}\sum_{i=1}^n a_i^2} ...
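Minimizing the exponent over $\lambda$ yields the subgaussian tail $\exp\big(-t^2/(2\sum_i a_i^2)\big)$. For $a_i = 1$ the left-hand side can be computed exactly from binomial counts, so the bound can be verified without simulation (a small check of my own):

```python
import math

def rademacher_tail(n, t):
    # exact P(sum_{i=1}^n eps_i >= t) for a_i = 1:
    # the sum equals 2H - n with H ~ Binomial(n, 1/2)
    return sum(math.comb(n, h) for h in range(n + 1) if 2 * h - n >= t) / 2 ** n

n, t = 20, 10
exact = rademacher_tail(n, t)            # = 21700 / 2^20, about 0.0207
bound = math.exp(-t * t / (2 * n))       # exp(-t^2 / (2 sum a_i^2)), about 0.0821
```

The exact tail sits roughly a factor of four below the Hoeffding bound at this threshold, which is typical of the slack in the MGF argument.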
$$\frac{1}{n}\sum_{i=1}^n f(X_i) - \mathbb{E}f \;\sim\; \frac{1}{n}\sum_{i=1}^n f(X_i) - \frac{1}{n}\sum_{i=1}^n f(X_i').$$
In fact,
$$\mathbb{E}\sup_f\Big|\frac{1}{n}\sum_{i=1}^n f(X_i) - \mathbb{E}f\Big| \le \mathbb{E}\sup_f\Big|\frac{1}{n}\sum_{i=1}^n f(X_i) - \frac{1}{n}\sum_{i=1}^n f(X_i')\Big| \le 2\,\mathbb{E}\sup_f\Big|\frac{1}{n}\sum_{i=1}^n f(X_i) - \mathbb{E}f\Big|.$$
The second inequality above follows by adding and subtracting $\mathbb{E}f$:
$$\mathbb{E}\sup_f\Big|\frac{1}{n}\sum_{i=1}^n f(X_i) - \frac{1}{n}\sum_{i=1}^n f(X_i')\Big| \le \mathbb{E}\sup_f\Big|\frac{1}{n}\sum_{i=1}^n f(X_i) - \mathbb{E}f\Big| + \mathbb{E}\sup_f\Big|\frac{1}{n}\sum_{i=1}^n f(X_i') - \mathbb{E}f\Big|.$$
For the first inequality we use Jensen's inequality:
$$\mathbb{E}\sup_f\Big|\frac{1}{n}\sum_{i=1}^n f(X_i) - \mathbb{E}f\Big| = \mathbb{E}\sup_f\Big|\frac{1}{n}\sum_{i=1}^n f(X_i) - \frac{1}{n}\sum_{i=1}^n \mathbb{E}f(X_i')\Big| \le \mathbb{E}_X\,\mathbb{E}_{X'}\sup_f\Big|\frac{1}{n}\sum_{i=1}^n f(X_i) - \frac{1}{n}\sum_{i=1}^n f(X_i')\Big|.$$
Moreover, $\frac{1}{n}\sum_{i=1}^n \big(f(X_i) - f(X_i')\big)$ has the same distribution as $\frac{1}{n}\sum_{i=1}^n \varepsilon_i\big(f(X_i) - f(X_i')\big)$. Note that $\frac{1}{n}\sum_{i=1}^n f...$
$\mathbb{E}e^{\lambda X_i} = 1 - \mu + \mu e^{\lambda}$. Again, we minimize the following bound with respect to $\lambda > 0$:
$$P\Big(\sum_{i=1}^n X_i \ge n(\mu + t)\Big) \le e^{-\lambda n(\mu+t)}\,\mathbb{E}e^{\lambda\sum X_i} = e^{-\lambda n(\mu+t)}\big(\mathbb{E}e^{\lambda X}\big)^n \le e^{-\lambda n(\mu+t)}\big(1 - \mu + \mu e^{\lambda}\big)^n.$$
Take derivative w.r.t. $\lambda$:
$$-n(\mu+t)\,e^{-\lambda n(\mu+t)}\big(1 - \mu + \mu e^{\lambda}\big)^n + n\big(1 - \mu + \mu e^{\lambda}\big)^{n-1}\mu e^{\lambda}\,e^{-\lambda n(\mu+t)} = 0$$
$$-(\mu+t)\big(1 - \mu + \mu e^{\lambda}...
$$P\Big(\mu_X - \frac{1}{n}\sum_{i=1}^n X_i \ge t\Big) = P\Big(\frac{1}{n}\sum_{i=1}^n Z_i - \mu_Z \ge t\Big) \le e^{-nD(\mu_Z + t,\,\mu_Z)} = e^{-nD(1-\mu_X+t,\,1-\mu_X)},$$
where $Z_i = 1 - X_i$ (and thus $\mu_Z = 1 - \mu_X$). If $0 < \mu \le 1/2$,
$$D(1 - \mu + t,\, 1 - \mu) \ge \frac{t^2}{2\mu(1-\mu)}.$$
Hence, we get
$$P\Big(\mu - \frac{1}{n}\sum_{i=1}^n X_i \ge t\Big) \le e^{-\frac{nt^2}{2\mu(1-\mu)}} = e^{-u}.$$
Solving for $t$,
$$P\Big(\mu - \frac{1}{n}\sum_{i=1}^n X_i \ge ...
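The lower bound $D(1-\mu+t,\,1-\mu) \ge \frac{t^2}{2\mu(1-\mu)}$ for $\mu \le 1/2$ can be spot-checked numerically (the grid of $(\mu, t)$ values is my own choice; points where the first argument would reach $1$ are skipped to keep the logarithms finite):

```python
import math

def kl(p, q):
    # Bernoulli relative entropy D(p, q)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

ok = all(
    kl(1 - mu + t, 1 - mu) >= t * t / (2 * mu * (1 - mu))
    for mu in (0.1, 0.3, 0.5)
    for t in (0.05, 0.1, 0.2)
    if 1 - mu + t < 1
)
```

At $\mu = 1/2$ and small $t$ the two sides nearly coincide, so the quadratic relaxation is tight exactly where the variance $\mu(1-\mu)$ is largest.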
Let $a_1, \dots, a_n \in \mathbb{R}$ and let $\varepsilon_1, \dots, \varepsilon_n$ be i.i.d. Rademacher random variables, $0 < p < \infty$. Then
$$A_p\cdot\Big(\sum_{i=1}^n |a_i|^2\Big)^{1/2} \le \bigg(\mathbb{E}\Big|\sum_{i=1}^n a_i\varepsilon_i\Big|^p\bigg)^{1/p} \le B_p\cdot\Big(\sum_{i=1}^n |a_i|^2\Big)^{1/2}$$
for some constants $A_p$ and $B_p$ depending on $p$.

Proof. Let $\sum |a_i|^2 = 1$ without losing generality. Then ...
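For even $p$ the moments are exact polynomials in the $a_i$, so the two-sided bound can be verified by full enumeration. For $p = 4$ one has $\mathbb{E}|\sum a_i\varepsilon_i|^4 = 3\|a\|^4 - 2\sum a_i^4$, so $A_4 = 1$ (by Jensen) and $B_4 = 3^{1/4}$ work; the vector below is my own example, not from the notes:

```python
import math
import itertools

def moment(a, p):
    # exact E|sum_i a_i eps_i|^p by enumerating all 2^n sign vectors
    n = len(a)
    total = sum(
        abs(sum(s * ai for s, ai in zip(signs, a))) ** p
        for signs in itertools.product((-1, 1), repeat=n)
    )
    return total / 2 ** n

a = [3.0, 1.0, 2.0]
norm = math.sqrt(sum(x * x for x in a))   # ||a|| = sqrt(14)
# E|S|^4 = 3*14^2 - 2*(81 + 1 + 16) = 392
m4 = moment(a, 4) ** 0.25
# A_4 = 1 by Jensen: E|S|^4 >= (E S^2)^2 = ||a||^4
# B_4 = 3^{1/4}:     E|S|^4 = 3||a||^4 - 2 sum a_i^4 <= 3||a||^4
```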
$$\mathbb{E}\Big|\sum a_i\varepsilon_i\Big|^2 = \mathbb{E}\Big|\sum a_i\varepsilon_i\Big|^{\frac{2}{3}p + \left(2 - \frac{2}{3}p\right)} \le \bigg(\mathbb{E}\Big|\sum a_i\varepsilon_i\Big|^p\bigg)^{\frac{2}{3}}\bigg(\mathbb{E}\Big|\sum a_i\varepsilon_i\Big|^{6-2p}\bigg)^{\frac{1}{3}} \le \bigg(\mathbb{E}\Big|\sum a_i\varepsilon_i\Big|^p\bigg)^{\frac{2}{3}}\big(B_{6-2p}\big)^{2-\frac{2}{3}p}$$
by Hölder's inequality with exponents $\frac{3}{2}$ and $3$, where $\mathbb{E}\big|\sum a_i\varepsilon_i\big|^{6-2p} \le (B_{6-2p})^{6-2p}$ by the upper bound. Since $\mathbb{E}\big|\sum a_i\varepsilon_i\big|^2 = \sum a_i^2 = 1$, this gives $\mathbb{E}\big|\sum a_i\varepsilon_i\big|^p \ge (B_{6-2p})^{-(6-2p)/2}$, completing the proof.
Lecture 08
Vapnik-Chervonenkis classes of sets.
18.465

which we denote uniformGC($\mathcal{F}$).

VC classes of sets

Let $\mathcal{C} = \{C \subseteq \mathcal{X}\}$, $f_C(x) = I(x \in C)$. The most pessimistic value is
$$\sup_P\,\mathbb{E}\sup_{C\in\mathcal{C}}\big|P_n(C) - P(C)\big| \to 0.$$
For any sample $\{x_1, \dots, x_n\}$, we can look at the ways that $\mathcal{C}$ intersects with the sample:
$$\{C \cap \{x_1, \dots, x_n\} : C \in \mathcal{C}\}.$$
Let $\Delta_n(\mathcal{C}, x_1, \dots, x_n) = \operatorname{card}...
$$\Delta_n(\mathcal{C}, x_1, \dots, x_n) \le \Big(\frac{en}{V}\Big)^V \quad\text{for } n \ge V.$$
Hence, $\mathcal{C}$ will pick out only very few subsets out of $2^n$ (because $\big(\frac{en}{V}\big)^V \sim n^V$).
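For example, half-lines $\mathcal{C} = \{(-\infty, a] : a \in \mathbb{R}\}$ have $V = 1$: on $n$ distinct points they pick out exactly the $n + 1$ "prefix" subsets, comfortably below the Sauer bound $(en/V)^V$ (a small sketch of my own, not from the notes):

```python
import math

def picked_out(points):
    # Delta_n(C, x_1..x_n) for C = {(-inf, a]}: count the distinct
    # intersections {x : x <= a} ∩ {x_1..x_n} as a ranges over R
    pts = sorted(points)
    cuts = [pts[0] - 1.0] + list(pts)  # a below all points, then at each point
    return len({frozenset(x for x in pts if x <= a) for a in cuts})

n, V = 10, 1  # half-lines have VC dimension 1
count = picked_out(list(range(n)))  # n + 1 = 11 subsets, vs 2^n = 1024
```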
Lemma 8.3. The number $\Delta_n(\mathcal{C}, x_1, \dots, x_n)$ of subsets picked out by $\mathcal{C}$ is bounded by the number of subsets shattered by $\mathcal{C}$.

Proof. Without loss of generality, we restrict $\mathcal{C}$ to $\mathcal{C} := \{C \cap ...
and $A$ is shattered by $T_i(\mathcal{C})$, then $\forall B \subseteq A$, $\exists C \in \mathcal{C}$ such that $B \cup \{x_i\} = A \cap T_i(C)$. This means that $x_i \in T_i(C)$, and that $C\setminus\{x_i\} \in \mathcal{C}$. Thus both $B \cup \{x_i\}$ and $B\setminus\{x_i\}$ are picked out by $\mathcal{C}$. Since either $B = B \cup \{x_i\}$ or $B = B\setminus\{x_i\}$, $B$ is picked out by $\mathcal{C}$. Thus $A$ is shattered by $\mathcal{C}$. Apply the operator $T = T_1 \circ \dots \circ T_n$ un...
$f_k : \mathcal{X} \to \mathbb{R}$, $\mathcal{C} = \big\{\{x : \sum_{k=1}^d \alpha_k f_k(x) > 0\} : \alpha_1, \dots, \alpha_d \in \mathbb{R}\big\}$.

Theorem 9.1. $VC(\mathcal{C})$ in the last example above is at most $d$.

Proof. Observation: for any $\{x_1, \dots, x_{d+1}\}$, we cannot shatter $\{x_1, \dots, x_{d+1}\}$ $\longleftrightarrow$ $\exists I \subseteq \{1, \dots, d+1\}$ s.t. we cannot pick out $\{x_i, i \in I\}$. If we can pick out $\{x_i, i \in I\}$, then for some $C \in ...
$\{x_i, i \in I\}$. Suppose we can: then $\exists \alpha_1, \dots, \alpha_d$ s.t. $\sum_{k=1}^d \alpha_k f_k(x_i) > 0$ for $i \in I$ and $\sum_{k=1}^d \alpha_k f_k(x_i) \le 0$ for $i \notin I$. But $\phi \cdot F(\alpha) = 0$ and so
$$\phi_1\sum_{k=1}^d \alpha_k f_k(x_1) + \dots + \phi_{d+1}\sum_{k=1}^d \alpha_k f_k(x_{d+1}) = 0.$$
Hence,
$$\sum_{i\in I}\phi_i\underbrace{\sum_{k=1}^d \alpha_k f_k(x_i)}_{>0} = \sum_{i\notin I}\underbrace{(-\phi_i)}_{\ge 0}\,\underbrace{\sum_{k=1}^d \alpha_k f_k(x_i)}_{\le 0}.$$
The left-hand side is positive while the right-hand side is nonpositive. Contradiction.
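For $d = 2$ with $f_1(x) = 1$, $f_2(x) = x$ on the real line, Theorem 9.1 says no 3 points can be shattered by $\{x : \alpha_1 + \alpha_2 x > 0\}$. A brute-force check over a coefficient grid (my own illustration; a finite grid can only under-count patterns, which is the safe direction for this test):

```python
def patterns(points, grid):
    # sign patterns of x -> I(a + b*x > 0) over a grid of (a, b), i.e. d = 2
    return {tuple(a + b * x > 0 for x in points) for a, b in grid}

points = [0.0, 1.0, 2.0]
grid = [(a / 2.0, b / 2.0) for a in range(-10, 11) for b in range(-10, 11)]
pats = patterns(points, grid)
# {x_1, x_3} without x_2 is provably unreachable: a > 0 and a + b <= 0
# force b < 0, hence a + 2b < 0, so the pattern (True, False, True) never occurs
```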
$\mathcal{C} \cup \mathcal{D} = \{C \cup D : C \in \mathcal{C}, D \in \mathcal{D}\}$ is VC

Lecture 09
Properties of VC classes of sets.
18.465

(1) obvious - we can shatter $x_1, \dots, x_n$ by $\mathcal{C}$ iff we can do the same by $\mathcal{C}^c$.

(a) By Sauer's Lemma,
$$\Delta_n(\mathcal{C} \cap \mathcal{D}, x_1, \dots, x_n) \le \Delta_n(\mathcal{C}, x_1, \dots, x_n)\,\Delta_n(\mathcal{D}, x_1, \dots, x_n) \le \Big(\frac{en}{V_{\mathcal{C}}}\Big)^{V_{\mathcal{C}}}\Big(\frac{en}{V_{\mathcal{D}}}\Big)^{V_{\mathcal{D}}} \le 2^n ...
$$P\bigg(\sup_{C\in\mathcal{C}}\Big|\frac{1}{n}\sum_{i=1}^n I(X_i\in C) - P(C)\Big| \ge t\bigg) \le 2\,P\bigg(\sup_{C\in\mathcal{C}}\Big|\frac{1}{n}\sum_{i=1}^n I(X_i\in C) - \frac{1}{n}\sum_{i=1}^n I(X_i'\in C)\Big| \ge t/2\bigg).$$

Proof. Suppose the event
$$\bigg\{\sup_{C\in\mathcal{C}}\Big|\frac{1}{n}\sum_{i=1}^n I(X_i\in C) - P(C)\Big| \ge t\bigg\}$$
occurs. Let $X = (X_1, \dots, X_n) \in \Big\{\sup_{C\in\mathcal{C}}\big|\frac{1}{n}\sum_{i=1}^n I(X_i\in C) - P(C)\big| \ge t\Big\}$ ...
$$P_{X'}\bigg(\Big|\frac{1}{n}\sum_{i=1}^n I(X_i'\in C) - P(C)\Big| \ge t/2\bigg) \le \frac{4}{n^2t^2}\sum_{i,j}\mathbb{E}\big(I(X_i'\in C) - P(C)\big)\big(I(X_j'\in C) - P(C)\big) = \frac{4}{n^2t^2}\sum_{i=1}^n \mathbb{E}\big(I(X_i'\in C) - P(C)\big)^2$$
$$= \frac{4nP(C)(1 - P(C))}{n^2t^2} \le \frac{1}{nt^2} \le \frac{1}{2}$$
since we chose $t \ge \sqrt{2/n}$. So,
$$P_{X'}\bigg(\Big|\frac{1}{n}\sum_{i=1}^n I(X_i'\in C_X) - P(C_X)\Big| \le t/2 \;\Big|\; \exists C_X\bigg) \ge 1/2$$
if $t \ge \sqrt{2/n}$. Assume that th...
$$\frac{1}{2}\cdot I(\exists C_X) \le P_{X'}\bigg(\sup_{C\in\mathcal{C}}\Big|\frac{1}{n}\sum_{i=1}^n \big(I(X_i\in C) - I(X_i'\in C)\big)\Big| \ge t/2\bigg)\cdot I(\exists C_X).$$
Now, take expectation with respect to $X_i$'s to obtain
$$P_X\bigg(\sup_{C\in\mathcal{C}}\Big|\frac{1}{n}\sum_{i=1}^n I(X_i\in C) - P(C)\Big| \ge t\bigg) \le 2\,P_{X,X'}\bigg(\sup_{C\in\mathcal{C}}\Big|\frac{1}{n}\sum_{i=1}^n I(X_i\in C) - \frac{1}{n}\sum_{i=1}^n I(X_i'\in C)\Big| \ge t/2\bigg).$$

Theorem 10.1. If $VC(\mathcal{C}) = V$, then ...
$$= 2\,\mathbb{E}_{X,X'}\,P_\varepsilon\bigg(\sup_{C\in\mathcal{C}}\Big|\frac{1}{n}\sum_{i=1}^n \varepsilon_i\big(I(X_i\in C) - I(X_i'\in C)\big)\Big| \ge t/2\bigg).$$

Lecture 10
Symmetrization. Pessimistic VC inequality.
18.465

The first equality is due to the fact that $X$ and $X'$ are i.i.d., and so switching their names (i.e. introducing ...
The supremum over $\mathcal{C}$ reduces to a supremum over $N$ representative sets $C_1, \dots, C_N$:
$$2\,\mathbb{E}\,P_\varepsilon\bigg(\sup_{1\le k\le N}\Big|\frac{1}{n}\sum_{i=1}^n \varepsilon_i\big(I(X_i\in C_k) - I(X_i'\in C_k)\big)\Big| \ge t/2\bigg) \le 2\,\mathbb{E}\sum_{k=1}^N P_\varepsilon\bigg(\Big|\frac{1}{n}\sum_{i=1}^n \varepsilon_i\big(I(X_i\in C_k) - I(X_i'\in C_k)\big)\Big| \ge t/2\bigg)$$
(union bound)
$$\le 2\,\mathbb{E}\sum_{k=1}^N 2\exp\bigg(-\frac{n^2t^2/4}{2\sum_{i=1}^n \big(I(X_i\in C_k) - I(X_i'\in C_k)\big)^2}\bigg)$$
(Hoeffding's inequality) ...
Taking
$$t = \sqrt{\frac{8}{n}\Big(\log 4 + V\log\frac{2en}{V} + u\Big)},$$
we can rewrite the bound as
$$P\bigg(\sup_{C\in\mathcal{C}}\Big|\frac{1}{n}\sum_{i=1}^n I(X_i\in C) - P(C)\Big| \le \sqrt{\frac{8}{n}\Big(\log 4 + V\log\frac{2en}{V} + u\Big)}\bigg) \ge 1 - e^{-u}.$$
Hence, the rate is $\sqrt{\frac{V\log n}{n}}$. In this lecture we will prove the Optimistic VC inequality, which will improve on this rate when $P(C)$ is small. As b...
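Plugging in numbers shows how slowly the pessimistic radius shrinks (the values of $V$ and $u$ are illustrative choices of my own):

```python
import math

def vc_radius(n, V, u):
    # t = sqrt((8/n)(log 4 + V log(2en/V) + u)) from the pessimistic VC bound
    return math.sqrt(8.0 / n * (math.log(4.0) + V * math.log(2.0 * math.e * n / V) + u))

# confidence radius for a class with V = 3 at confidence e^{-u} with u = 3
radii = [vc_radius(n, V=3, u=3.0) for n in (100, 1000, 10000)]
```

At $n = 100$ the radius is still larger than $1$ (i.e. vacuous for probabilities), and it takes $n$ in the thousands before the guarantee becomes informative, which is the motivation for the sharper, optimistic bound when $P(C)$ is small.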
With probability at least $1 - e^{-t}$,
$$P(C) - \frac{1}{n}\sum_{i=1}^n I(X_i\in C) \le \sqrt{\frac{2P(C)\,t}{n}}.$$

Theorem 11.1. [Optimistic VC inequality]
$$P\Bigg(\sup_{C\in\mathcal{C}}\frac{P(C) - \frac{1}{n}\sum_{i=1}^n I(X_i\in C)}{\sqrt{P(C)}} \ge t\Bigg) \le 4\Big(\frac{2en}{V}\Big)^V e^{-\frac{nt^2}{4}}.$$

Proof. Let $C$ be fixed. Then
$$P_{X'}\bigg(\frac{1}{n}\sum_{i=1}^n I(X_i'\in C) \ge \frac{P(C)}{2}\bigg) \ge \frac{1}{4}$$
whenever $P(C) \ge \dots$. Indeed, $P(C) \ge \frac{1}{...}$
so it suffices to control events of the form
$$\frac{1}{n}\sum_{i=1}^n \varepsilon_i\big(I(X_i'\in C_k) - I(X_i\in C_k)\big) \ge \frac{t}{\sqrt 2}\,\sqrt{\frac{1}{n}\sum_{i=1}^n I(X_i\in C_k) + \frac{1}{n}\sum_{i=1}^n I(X_i'\in C_k)}.$$
The last expression can be upper-bounded by Hoeffding's inequality as follows:
$$\mathbb{E}\sum_{k=1}^N P_\varepsilon\bigg(\frac{1}{n}\sum_{i=1}^n \varepsilon_i\big(I(X_i'\in C_k) - I(X_i\in C_k)\big) \ge ...